Anthony Nguyen, a Security Solutions Architect at Amazon, spoke with Sam Stuber from the NexGenT Career Services team about his career in IT and cybersecurity. At the end of the interview, students had the chance to ask him success questions.
While the exclusive hour-long interview for NexGenT students was packed with helpful tips drawn from Anthony's own experience, this condensed highlight reel offers a plethora of brilliant job-searching tips.
In this insightful interview, Anthony shares the following things:
The steps Anthony took to get where he is at Amazon
How failures can be opportunities
Bots vs. people reviewing resumes and how to optimize for IT and cybersecurity
The IT hiring process, both technical and non-technical, and what you should know to prepare yourself
Sample interview questions
What to do when you don’t know the answer to an interview question
What hiring managers are looking for, including the T shape career development model
Additionally, here are some of the amazing resources that Anthony mentions and references:
How to Interview at Amazon: While this guide was written by Amazon specifically for Amazon job seekers, it's extremely applicable to any job you are preparing for. Included are sections on behavioral-based interviewing, the STAR answer format, and tips for great interview answers: https://www.amazon.jobs/en/landing_pages/in-person-interview
101 Cybersecurity Slides: While Anthony took the time to share exclusive knowledge with NexGenT students, he has also mentored students in the past at places like UC Irvine. He shared with us the slides he uses to help students understand the different paths into IT and cybersecurity. Link to InfoSec 101: How to be a Cybersecurity Expert Slides Here
Coding Interviews & Challenges: Use websites like Leet Code to help you enhance your skills and prepare for technical IT interviews.
Ready to reach out to technical IT recruiters? Check out our step-by-step guide with email templates.
NexGenT was founded on the belief that education is for everyone and that the ideal educational system should be based on real-world skills training. The company ethos includes the belief that education cannot leave people in debt with degrees that do not teach the real skills needed to succeed, and that it should prepare people to be ready for the workforce. We want our students to be field-ready after completing our program, similar to how we trained network engineers in the military.
This is how we view proper training: it should actually prepare folks for a real job and give them the tangible skills necessary to do that job (I know, crazy, right!?). However, we find that traditional college education is lacking in technical fields of study such as information technology. What IT needs are people who can do the job, not people with a head full of concepts and no application.
For this reason, people in IT mostly get hired based on their skills and certifications. Not enough college programs teach the necessary technical skills, and to make things even worse, traditional institutions leave their students in massive amounts of debt. So, it's important to highlight this issue of college debt and discuss alternatives to traditional academia, but alternatives that actually provide the education needed for a great career.
Last year, more than 20 million students attended a college or university, and 70% graduated with a significant amount of student loan debt. The national student debt is nearly $1.5 trillion, collectively held by around 44 million Americans. This figure is truly unsustainable, and there must be change. The average student debt is around $37,000, a significant amount of money that could otherwise have been invested elsewhere.
At NexGenT, we provide real-world skills training for a fraction of the cost of college. Students graduate in just months instead of years and gain sought-after skills without the burden of large amounts of debt. This is the kind of thinking we will need in order to fill the millions of tech jobs that will open in just the next couple of years.
And don't take it only from us: we created a short video with raw footage from some of our students who were inspired to share their stories and discuss this topic. The video starts with a question about college, and then students share their authentic stories, providing genuine insight into the value of alternative educational programs and the mission at NexGenT.
The underlying concept of Edge Computing is simple: it’s the idea that pushing “intelligence” to the edge of the network will reduce the load on the core servers and on the network itself, and make the network more efficient overall.
Edge Computing, like many other technological advancements, arose as a response to an issue. The issue? A massive overload on servers caused by a “surplus” of data generated by networked devices incapable of making decisions on their own.
If you've read The Hottest IT Trends Of Our Time, you probably recall the security camera example: a scenario in which a few security cameras record information and send it to a centralized server for as long as they are turned on.
This, of course, is not an issue if you have just a few cameras. But, when you get into a situation like that of the city of London, which has over 400,000 cameras, you run into a huge issue: an overload on the main server and network that can, and probably will, break things even if you have the biggest pipe in the world.
In a hypothetical case similar to London’s, ideally you’d want to filter the information that is being sent to the network’s main servers. This, as you might already know, would require those data-generating devices to be capable of making decisions—of identifying which information is relevant and which is irrelevant.
There are many more applications of Edge Computing than surveillance cameras. However, surveillance cameras are one of the biggest use cases for Edge Computing because they require a lot of bandwidth at peak operation. Also, there are several things that can now be done with data collected from surveillance cameras that weren't realistically possible even a few years ago, when most cameras were still "dumb" devices.
For example, you could be a British government agency hunting down a criminal in London. So, you scan a picture of the criminal and create a biometric map of his/her face. Then, you can configure your security cameras to only report back to the main server once they detect someone that matches that biometric map (obviously this would require facial recognition software).
This, of course, is only possible if security cameras sitting on the edge of the network are capable of making decisions. Otherwise, the network’s main servers would go nuts processing data, most of which would be garbage, from over 400,000 cameras streaming 24/7!
The overload problem wasn't an easy one to solve. But since sensors and software are now incorporated into "edge" devices (devices living on the edge of the network), network engineers can configure these devices to send only relevant information to the network's main servers. In other words, these devices can now make decisions without having to rely on anything other than their own computational power, making things better for everybody.
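To make the criminal-hunt scenario concrete, here is a minimal, purely hypothetical sketch of edge-side filtering: the camera decides locally which frames are worth reporting instead of streaming everything. The `matches_target` function and the "face signature" field stand in for real facial-recognition software, which is far more involved.

```python
# Hypothetical sketch of edge-side filtering: the camera reports a frame
# upstream only when on-device recognition flags a match.
# `face_signature` is an invented placeholder for a real biometric map.

def matches_target(frame, target_signature):
    """Placeholder for on-device facial recognition: a trivial comparison."""
    return frame.get("face_signature") == target_signature

def frames_to_report(frames, target_signature):
    """Return only the frames the edge device should send to the server."""
    return [f for f in frames if matches_target(f, target_signature)]

frames = [
    {"id": 1, "face_signature": None},      # empty street: stays local
    {"id": 2, "face_signature": "abc123"},  # possible match: report it
    {"id": 3, "face_signature": "zzz999"},  # someone else: stays local
]

print([f["id"] for f in frames_to_report(frames, "abc123")])  # [2]
```

The point is the shape of the logic, not the recognition itself: of three recorded frames, only one ever touches the network, which is exactly the load reduction the article describes.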
Edge Computing not only helps to reduce network loads, but also increases efficiency, device functionality, and the speed of information processing, since data doesn't have to travel far to be analyzed. But it's not all good news: there are problems that arise from the incorporation of Edge Computing, as well as smaller issues that can affect businesses' operations. Let's take a look at the pros and cons of Edge Computing:
Pros to Edge Computing
Reduces network load
Let’s go back to the London security camera example for a second. Pretend you’re running security for the entire city, and that you have decided to upgrade all of your cameras to stream in 4k definition to make better use of recordings…
A 4K stream consumes somewhere around 25 Mbps. So, if all of these cameras could only send information back, your network would have to smoothly process the streams coming from over 400,000 cameras every second, roughly 10 Tbps of aggregate traffic. In that case, you'd have to wait for technology to catch up to your needs, my friend!
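The back-of-the-envelope math behind that claim is worth spelling out, using the camera count and bitrate from the example above:

```python
# Aggregate bandwidth if every camera streamed 4K around the clock.
cameras = 400_000          # roughly London's camera count, per the article
mbps_per_stream = 25       # typical 4K stream bitrate

total_mbps = cameras * mbps_per_stream
total_tbps = total_mbps / 1_000_000  # 1 Tbps = 1,000,000 Mbps

print(f"{total_tbps:.0f} Tbps of sustained traffic")  # 10 Tbps of sustained traffic
```

Ten terabits per second, sustained, just for raw video: no single ingest point handles that comfortably, which is why filtering at the edge matters.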
Edge Computing makes these types of crazy but quite common scenarios possible. With Edge Computing, networks can scale to meet the demands of the IoT world without overconsuming resources on the network and servers, and without wasting resources processing irrelevant data.
Network engineers can program Edge Computing devices to perform several different kinds of functions. I’ve covered the example of filtering data before sending it over the network, but Edge Computing devices are capable of doing many more things. Since they have their own software and can process their own data, they can be configured to handle edge data in ways that have not yet been imagined.
With the new capabilities presented by leveraging data at the edges, networks will inevitably have more functionality and will hopefully become increasingly more efficient as well.
Efficiency (real time access to analytics)
Another benefit of Edge Computing is that it enables real time data analysis performed on the spot—which is a big deal for businesses.
For example, if you’re part of a manufacturing business with several manufacturing plants, as a manager of one of those plants you could greatly benefit from analyzing your plant’s data as it is being recorded, rather than having to wait for that data to go to a central server to be analyzed and then be sent back to your plant.
Such speed translates into immediate action, which ultimately results in cost reductions and/or revenue increases (the main things businesses try to do).
Imagine being the manager of a manufacturing plant with an issue in your production process, and having to wait for your data to be analyzed by the company's main server. With Edge Computing, you could find out what that issue is virtually immediately, saving significant resources!
Cons to Edge Computing
The main negative aspect of Edge Computing is security. Before edge devices became "intelligent," they weren't a vulnerable part of the network; they were just "dumb" devices performing very limited tasks.
However, by adding advanced software and hardware to these devices and empowering them to analyze their data locally, all of these devices become more vulnerable to malicious attacks.
With networked devices analyzing their own data on the spot, it is now even more likely for any of them to be infected with malware and begin distributing it across your network.
The security problem isn’t going away anytime soon. As of right now, and for several years to come, building full-blown security into every endpoint of a network would not be feasible, which makes conventional security virtually impossible to accomplish. Therefore, networks utilizing Edge Computing will have to rely more on security through the network itself. [More on that topic here]
Increased risk of human error
With several new intelligent devices connected to a network, configuring all of these devices correctly becomes a challenge. Network engineers can now go into each one of those little devices and perform several configurations, which makes it easy to make a mistake.
Going back to the security camera example, imagine something as simple as setting up one of your cameras to record during the day rather than at night. This, of course, isn't inherently caused by Edge Computing, but by human error. However, the likelihood of such misconfigurations certainly increases with the incorporation of more Edge Computing devices.
Cost of functionality
The advancement of Edge Computing devices makes it possible for businesses to modify their business models. Nowadays, for example, instead of selling you just a device, vendors sell you the device plus the ability to use certain key functionalities, for an extra fee, of course. It is very easy to pretty much double the cost of the hardware just by adding functionalities.
This is something to watch out for if you’re deploying Edge Computing into your network because, as you know, the IT industry revolves around money so you must keep your priorities in check. Make sure that you read the fine print every time you purchase new technology so that you’re only paying for what you’re truly going to use, and aren’t being charged for functionalities that you don’t need.
Why should you care?
Edge Computing is enabling the Internet of Things to take over the world. According to Cisco Systems, by 2020 there will be tens of billions of devices connected to the Internet. Even if all of these devices were to send text files all day every day, we’d still need Edge Computing technologies to avoid big issues.
This means that every single network in the near future will use Edge Computing to operate. Hence, if you start digging into the weeds of Edge Computing, you’d be at the forefront of the industry—at least until the next big change comes by.
Nonetheless, you must watch out for costs, since they can skyrocket in the blink of an eye; you must go the extra mile to protect the integrity of your data, since Edge Computing could make it quite vulnerable to malicious attacks; and, you must think about better methods for configuration management and orchestration of network devices as more and more computing and intelligence is deployed to the edge of the network!
According to the CEO of Cisco Systems, soon there will be one million new connections every hour. So unless something drastic happens, we will live in a world where almost everything will be connected to a network. Gadgets ranging from an Apple Watch to a toothbrush will be generating data and communicating with a server.
This unprecedented growth of IoT connections, which is being caused by the incorporation of sensors and software in everyday tools, will eventually make people’s lives easier and prevent future problems.
However, the alarming growth of devices being connected to the internet is generating certain problems with the infrastructure that information runs on.
There are two main problems that come with the boom of IoT, and these are scalability and security.
Scalability, because the current way devices communicate with servers isn't optimal when too many devices talk to the same server, especially if you require advanced functionality on those devices; and security, because the proposed solution to the scalability problem can make any network vulnerable to malicious attacks.
Back in the day, when a device was connected to a network (let's say a security camera), there was no "intelligence" incorporated into that device. The camera could basically be either on or off, and while it was on, it would be sending information to the server the entire time.
If you had a few cameras talking to a server this wouldn’t be an issue because the server could process all of the information. But, when you add hundreds or thousands of cameras that are doing the same thing, you start to have major issues even if you have a huge pipe.
The scalability problem is solved by Edge Computing. This is the idea that the “intelligence” of a network can be pushed to the “edge” of the network. Edge computing takes advantage of intelligent devices that are connected to a network by configuring them in such a way that they only send relevant information to the server.
In the security camera example, since old security cameras had no way to determine whether they were sending relevant information or not, they would simply send anything that was recorded as long as they were turned on.
However, with the advancement of IoT devices, these cameras got smarter, and now, using sensors and software incorporated in them, they are able to send, for example, only video where there is motion.
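A minimal sketch of that motion-only behavior, under invented assumptions: frames are represented as flat lists of pixel intensities, and the thresholds are illustrative, not taken from any real camera firmware.

```python
# Hypothetical motion filter: an edge camera compares consecutive frames
# and uploads a frame only when enough pixels changed between them.

def motion_detected(prev, curr, pixel_delta=10, changed_ratio=0.05):
    """True if more than `changed_ratio` of pixels moved by > `pixel_delta`."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
    return changed / len(curr) > changed_ratio

def frames_to_upload(frames):
    """Keep only frames that show motion relative to the previous frame."""
    uploads = []
    for prev, curr in zip(frames, frames[1:]):
        if motion_detected(prev, curr):
            uploads.append(curr)
    return uploads

still = [100] * 100               # a static scene, twice in a row
moved = [100] * 90 + [200] * 10   # 10% of pixels changed brightness

print(len(frames_to_upload([still, still, moved])))  # 1
```

Three recorded frames, one upload: the camera itself decides that the two identical "still" frames aren't worth the bandwidth, which is exactly the decision-making the article attributes to smarter IoT devices.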
There are several benefits to Edge Computing. Among them, of course, is the capability of significantly reducing the load on the network devices and primary servers, allowing the network to operate more efficiently.
Another not-so-obvious benefit is the increased functionality of common devices. Now that technology has made simple everyday devices “intelligent,” these devices can be configured to perform advanced tasks and make decisions “on the spot” without having to completely rely on centralized servers.
Edge Computing can solve the scalability issue, and those who take advantage of this trend will become very valuable as IoT connections grow worldwide. However, solving the scalability issue is only half the battle, which leads us to our next topic:
Solving The Security Issue
Everything is butterflies and roses until you realize that with every IoT connection you are adding one more “hackable” device to your network.
Consider the fact that (probably) it is never going to be financially feasible to build robust security into every single IoT endpoint and you’ll realize that, at least for now, true security for IoT networks will have to come from somewhere else in the stack—that place is the network itself.
The main challenge with IoT security is financial resources. Since it is not viable to build the same level of security into a sensor that helps open a door as into an endpoint where sensitive data is stored, many devices will remain vulnerable to attacks.
With this in mind, visibility into the network becomes crucial. However, even when looking at the network holistically, knowing whether a device is distributing malware or stealing credentials can be very challenging if you have thousands and thousands of devices.
Add to this the fact that most traffic is now encrypted, which makes securing the network even more difficult. According to Cisco, over 80% of the world’s traffic will be encrypted in just a couple of years. You’d think that this increases security levels. However, Cisco also estimates that 70% of attacks will use encryption as well.
So, we find ourselves at a point where too many devices are vulnerable to attacks that can, and probably will be, camouflaged as encrypted traffic. Hence, detecting which devices have been compromised will be a very challenging task—especially since dealing with software updates in so many devices could easily compromise security as well.
The solution to this is something called Encrypted Traffic Analytics (ETA). ETA allows a network to analyze traffic metadata and, with a very high degree of certainty, determine whether traffic is malicious or not.
ETA uses machine learning to build a history of how encrypted benign traffic looks and behaves. This cutting edge technology allows the network to develop a frame of reference that is later used to infer the probability of traffic being “infected” by comparing it to previous data sets.
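To illustrate the underlying idea only, here is a toy, pure-Python sketch of "baseline and compare": learn what benign flows look like from metadata (mean packet size, mean inter-arrival time), then flag flows that sit far from that baseline. The features, numbers, and nearest-centroid approach are all invented for illustration; real ETA uses far richer telemetry and models.

```python
# Illustrative-only sketch of the intuition behind ETA: build a reference
# from benign encrypted flows' metadata, then flag outliers.
import math

def centroid(flows):
    """Mean feature vector of a set of flows."""
    n = len(flows)
    return [sum(f[i] for f in flows) / n for i in range(len(flows[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Features per flow: (mean packet size in bytes, mean inter-arrival time in ms)
benign_flows = [(900, 40), (1100, 55), (1000, 50), (950, 45)]
baseline = centroid(benign_flows)

def looks_malicious(flow, threshold=300):
    """Flag flows whose metadata sits far from the benign baseline."""
    return distance(flow, baseline) > threshold

print(looks_malicious((1000, 48)))  # False: close to normal traffic
print(looks_malicious((200, 500)))  # True: tiny packets, odd timing
```

Notice that nothing here decrypts anything: the verdict comes entirely from how the traffic behaves, which is the whole point of analyzing encrypted flows by their "body language."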
ETA, of course, walks a fine line between privacy and security, because the technology could be used for purposes other than detecting malicious traffic. But perhaps true privacy is something that will inevitably have to go away as IoT connections grow all over the world.
The Bottom Line
IoT connections will only continue to grow over the next few years. With technology becoming cheaper and, as a result, more accessible to everyone in the world, it will be just a matter of time before virtually everything is connected to the internet.
This will prove to be beneficial for the world. Imagine how many tragedies could be prevented, or how much easier life could be for children and elders. However, in order for IoT to truly take over, networking technology must advance to the point where this outrageous IoT growth is even possible.
Edge Computing proposes a viable solution to the scalability problem. With the implementation of Edge Computing we will have access to data that has never been leveraged before, which would lead to the simplification of human life.
But this will come at a cost—and that cost could be privacy. With so many intelligent devices that are vulnerable to encrypted attacks, it would be up to solutions such as ETA to help prevent the misuse of critical information.
Unless you’re the NSA, there is really no easy way to know whether a device in a network of hundreds of thousands of devices is distributing malware, or if due to a recent update a device has been made vulnerable to malicious attacks. It is now up to the network itself to prevent these attacks by using cutting edge technology such as ETA.
Cisco Systems is tackling this issue directly. They have announced their Network Intuitive, which uses ETA and Predictive Maintenance to “flag” potential malicious traffic and fix loopholes before they are taken advantage of by hackers.
But even with Cisco’s efforts, the imminent issues we are faced with due to exponential IoT growth are far from being solved. Hence, any IT professional who stays on top of the evolution of IoT, Edge Computing or ETA will become an invaluable asset to any organization.
According to research from the University of Massachusetts, most of us are liars. Researchers found that 60% of people lie two to three times in every 10 minutes of conversation. Sometimes we don't even realize it, but we're constantly doing it.
Whether it is trying to impress people by blowing something out of proportion, or even telling a homeless person that we “have no change” just to avoid interacting with him or her, we can’t deny that we lie a lot.
You might not know this but there’s a whole science behind lying, and it goes well beyond words.
The study of body language, particularly micro expressions, was pioneered by Dr. Paul Ekman, a psychologist who gained a reputation for being “the best human lie detector in the world.”
By using the seven main emotions as a baseline, Dr. Ekman found that it's possible to derive a huge number of micro expressions that can indicate what a person is feeling and, as a result, whether the person is lying.
Have you seen any movies where detectives interrogate the bad guys in a room?
If you’ve seen a show called Lie To Me you’ll know exactly what I’m talking about…
Detectives first ask a series of straightforward questions to which they know they will get a truthful answer.
This allows them to get an idea of how the suspect reacts when telling the truth. Once they do this, they jump to the interesting questions and are able to infer if a person is lying based on his/her reactions.
Just like detectives use Dr. Ekman’s research to get truthful answers, Cisco’s intuitive network uses intent-based analytics and machine learning to track encrypted traffic’s “body language” and infer if there is malware in it.
Cisco’s Network Intuitive does this by using a pretty interesting concept called ETA, which stands for encrypted traffic analytics. It identifies patterns in encrypted traffic, most of which is non-malicious, and builds a reference that enables it to point out if something seems odd.
Introducing Network Intuitive
It’s no surprise that the tech industry changes all the time. But, the introduction of the Network Intuitive, and how it looks to simplify some of the major challenges of today, is a clear sign that the industry is about to change drastically.
In the words of Cisco’s CEO, Chuck Robbins, “We’ve reached an inflection point.” Billions of devices are connected to the internet. Security threats are a major and growing concern and environments are becoming too complex.
Cisco is promising to solve all of these problems by building a platform that allows any organization to scale, secure their information and simplify their environments with their futuristic network; a platform “that meets companies where they are.”
Everything sounds fascinating. But you know what else people don't seem to talk about, something that is inevitably due for a major change as well?
The IT profession itself.
I didn’t hear Chuck Robbins say anything about entry/associate level technical training during Cisco Live…
If you’re in the early years of your career or looking to get started, and you keep up with the latest trends (you should!), you have a huge opportunity in front of you.
Dozens of technologies are being integrated into networks, and tons of devices generating endless amounts of data are being connected to them. Yet, entry-level certification programs still force people to specialize in one area of IT from the very start.
Nowadays, this is like asking pre-med students to pick a specialty field during their freshman year when they still don’t know what they like!
Why should you be forced to choose between security or data center so early? It makes no sense at all…
Cisco is looking to solve three main issues, according to their CEO:
1. Doing things at scale: this is a response to the outrageous number of devices being connected to the internet. Next year, over a million new connections every hour are expected.
Cisco looks to provide enterprises, which are already using over 3.1 billion devices to change their business models, with a network that can heal itself through “predictive maintenance.”
2. Simplifying operations for companies: networks are getting more complex every day. Using programmability and automation, Cisco aims to simplify operations for companies who, in many cases, have failed at organization.
For instance, some companies use multiple cloud service providers, and their IT departments struggle to keep up with managing them all, adding complexity where it is not needed.
3. Securing information: more and more traffic is being encrypted every day. Very soon, more than 80% of traffic will be encrypted and 70% of attacks will use encryption. ETA (encrypted traffic analytics) will be used to counter these attacks.
On the downside, this threads the needle between privacy and security. It will for sure be a fun debate…
Add to these three things the fact that companies are now looking to hire people who understand the stack of networking technologies (read more about it here) and you’ll realize that you must learn as many technologies as possible to be successful in IT.
You’ll still need to specialize down the road, don’t get me wrong. But, in order to stand out from the crowd during the early days of your career and to collaborate in “full stack” IT teams, you’ll need to get a networking foundation that includes as many relevant technologies as possible.
But you won't hear Cisco talk about it. No, not them. They make too much money from their certification programs, which, of course, are very famous but haven't even adapted to the company's own plans. So, until they adjust their training programs to meet the demands of the future, they will remain silent about this topic and keep making a fortune off of the huge brand they've built.
They’re already busy building an intelligent network, and training isn’t even their core operation. Like it or not, education is just an additional revenue stream for them. They’re just a giant company that needs to keep growing to make sure their stockholders stay happy.
The network of the future calls for the network engineer of the future. And the network engineer of the future is a full stack network engineer.
A full stack network engineer knows networking, security, cloud and automation. The last one probably being one of the most important skills of the immediate future.
Full stack network engineers are, and will be, professionals who, although specialized in one or two areas, understand how all the components of modern networks must work together in order to operate smoothly, and who are able to communicate and collaborate across areas of IT. This helps companies streamline their operations and avoid hiring a separate person to manage each technology they choose to incorporate into their environments.
IT professionals will need to become full stack network engineers or be rendered obsolete by their colleagues before they can say “let’s wait and see what happens.”
On the other hand, those who take action immediately will reap the rewards of being action takers, since they will build some of the most coveted skill sets of the near future.
But how can you take advantage of the fact that training programs are still forcing people to focus in one area at a time?
That’s a question that you’ll have to figure out on your own. According to the Bureau of Labor Statistics there will be hundreds of thousands of open networking jobs in the upcoming years and not enough people to fill them.
Despite this, IT training programs and mainstream certifications haven’t done anything about it.
Sadly, all that time and effort invested in getting technical certs often ends up being just a way to get past HR. Sure, you'll get through, and maybe even get the job. But what happens next?
Are you most likely to stand out from the crowd and make your bosses love you (and maybe even promote you) by showing them your fancy diploma with medieval letters and a shiny frame, or by actually getting shit done?
Ask yourself that question…
Full stack network engineers must understand everything. They don’t have to be the ultimate expert but they must have an underlying foundational knowledge that will allow them to succeed.
This type of engineer is the network engineer of the future. Like it or not, these are the people companies are looking to hire already; even before the announcement of The Network Intuitive was made.
It's up to you. You can either be among those who jump on trends and are always looking for an edge, or you can wait and watch your peers, in one or two years, pass you by as if you were moving backwards.
Don't lie to yourself. If you've read all the way to here, you probably have all the motivation, drive, and self-discipline that it takes to be successful in IT (and that will never change, at least for now).
So be smart about how you will take advantage of this new trend because it can be the deciding factor on whether it takes you 15 years or 4-5 years to reach a top position paying you a great salary with great benefits.
You can learn more about becoming a full stack network engineer here.
According to the U.S. Bureau of Labor Statistics, there were almost 6 million open jobs in America, pretty close to the all-time high. On the surface, this might seem like a good thing: it's easy to think that since employers are hiring, the economy is growing. Deeper down, nonetheless, such a high number of open jobs is a big red flag. It means that there are not enough qualified people in the country to fill high-skilled positions, many of which are IT jobs. Think about it: with unemployment hovering around 5%, there are obviously plenty of people to fill low-skilled jobs. So, if these jobs didn't require highly educated and experienced people, there wouldn't be so many available. Would there?
The lack of highly skilled people is particularly evident in the tech industry, where every year a ton of companies look to hire foreign talent through the H-1B program. This program, which grants a total of 85,000 non-resident work visas annually, most of which go to tech-related fields, has received well over 200,000 applications per year for the past few years in a row. Even though there have been cases in which companies try to take advantage of this visa, such high demand for foreign talent signifies that there is a lack of qualified individuals in the country. According to Code.org, in a few years there will be almost 1 million more IT jobs than people who can fill them. Hence, debating whether the number of visas granted by the H-1B program should be reduced is irrelevant.
Either way, hundreds of thousands of IT jobs will be open in the not-so-distant future. A majority of them, to many people’s surprise, will be in the field of networking. People often make the mistake of thinking that more “tech” jobs mean more coding jobs. But the truth is that there is far more demand for networking skills. Every single business out there, whether it is a small food chain or a Fortune 500 corporation, needs IT people. Add to that the estimate that there will be over 50 billion devices connected to the internet very soon, and you’ll realize that demand for networking skills is only moving up. Yet, not much is being done to address the skills gap. The Obama administration launched an initiative called TechHire, which will help train thousands of people interested in IT. But, with only about 10% of companies claiming that higher education is very effective at training people on the skills their organizations need, it’s obvious that this problem goes far beyond funding. On top of everything, the current president is laser-focused on bringing jobs back to America. In fact, it was the whole premise of his campaign, and it seems like it will be the whole premise of his administration. But bringing IT jobs back is not the problem; there are plenty available. The real issue is making sure people have the right skills to live up to the high expectations of companies.
All these IT jobs will remain open unless education aligns better with what companies need: people who can actually get stuff done. However, we all know this isn’t going to happen anytime soon. On one hand, there are higher education institutions with outdated curriculums taught by people who, more often than not, have no real-world experience. On the other hand, there are entry-level certifications that, even though many companies use them as a reference, can’t possibly certify that a person is ready to hit the ground running. After all, these certifications, although some can be very challenging to earn, are based on traditional written testing rather than hands-on performance validation.
This won’t change anytime soon. Taking advantage of the IT industry and all it has to offer is up to each person. Those who are passionate, motivated, and constantly looking for new in-demand skills to add to their repertoire will reap the benefits of being some of the most valuable people in their organizations. However, making it to the top of the industry can be very challenging if you’re being taught by professors who never made it there themselves. The reality is that real-world IT jobs are very different from what IT training schools and entry-level certification institutes teach. Worst of all, even after going through training programs and spending a lot of money, students are thrown into the industry without a career plan for moving up the ranks of IT. As a result, many people spend years of their lives in unfulfilling, boring positions that are only meant to provide some hands-on experience to jumpstart a career.
If you want to set off on a path to a fulfilling and lucrative career in the IT industry, without spending a ton of money and time to reach the top, strongly consider this career blueprint program along with the following tips:
Don’t waste your time and money on a college degree
Universities are overrated. Back in the day, earning a degree might have been the way to break into the industry. Nowadays, though, they spend 4+ years of people’s lives teaching plenty of things that are not really needed in the industry, and, as if that weren’t enough, they do it at an outrageous price. American student debt recently surpassed 1.3 trillion dollars! Companies want to bring the best kind of people onto their teams and have them stay, adding value, for many years to come. The best people have a mix of self-motivation, self-discipline, and proactiveness. If you can show a company that you have all of these traits, they couldn’t care less whether you have a degree from a top school. In fact, even with a degree from one of the best schools in the nation, a bad attitude and a lack of motivation would mean either not getting hired or losing the job shortly after. You can avoid slowing down your career and taking on tens of thousands of dollars in debt simply by realizing that you can gain valuable skills in many other ways. These are the top 5 IT careers that don’t require a college degree.
Get hands-on experience
The best thing you can possibly have when you’re applying for IT jobs is hands-on experience. Like I said above, companies want people who are motivated, disciplined, and capable of getting stuff done. Nothing showcases these traits better than someone who went the extra mile to gain new skills and hands-on experience with real-world equipment. Unlike in many other professions, such as engineering and medicine, people who aspire to an IT job can gain critical hands-on experience even before breaking into the field. This is a good thing: it allows people to learn the most important skills and practice performing tasks without any risk. However, companies are well aware of this. So, even more than in other industries, employers expect people to have a great deal of hands-on experience even when they are looking to land their first IT job.
Earn entry level certs, but make sure you capitalize on them
If you’re struggling to get hands-on experience and want to take a more traditional route, work toward earning entry-level certifications. Even though a lack of hands-on experience can be a big problem when it comes to landing a full-time job, one or two entry-level certs could help you land a good internship where you can learn to handle real equipment. If you have experience performing real-world tasks on real equipment and hold one or two technical certifications, landing a full-time IT job shouldn’t be a problem. That said, avoid racking up entry-level certifications without taking specific actions that move you closer to your goals. Hiring managers look for people who are proactive and get stuff done. Having too many certifications under your belt while being unable to actually perform real tasks could mean that you are too much of a learner and too little of a doer, which can backfire.
Have a plan to move up and avoid getting stuck
Many people work hard to get certifications and some experience under their belt, only to end up stuck in entry-level roles for far too many years. Entry-level roles are good for building the bulk of your resume and getting your feet wet, but beyond that, you should strive to get out of them as fast as you can. A common mistake is thinking that the experience and skills gained from troubleshooting will be enough to get promoted to better-paying positions. This rarely works out. If you want to get promoted fast, focus on gaining new skills instead. Realistically, anyone could move all the way from the helpdesk to a network engineer position within a year or so. Too many people think this is impossible simply because it’s uncommon. In reality, that’s only because most people don’t have a solid plan to execute after breaking into the industry, so they end up going around in circles.
The bottom line
There are far too many open IT jobs and not enough people in the country to fill them. This is due to a disconnect between the IT training industry and what companies actually need. Things are not looking good. With IoT just around the corner, the number of open IT jobs will only increase. Yet, not much is being done to address the skills gap. The Obama administration launched an initiative called TechHire, which will help train thousands of people in IT. However, this won’t solve the problem: the initiative’s resources will fund people through traditional education institutions, which only about 10% of companies believe teach the skills that are truly needed. On top of that, the Trump administration is looking to bring jobs back to America. That is great for the country in general, but IT jobs don’t need to be brought back; there are plenty already available. The real issue is getting people to acquire the most in-demand skills and training them so they can hit the ground running. Unfortunately, the education system won’t change overnight. Hence, it is up to each person to take control of their future and look for education alternatives that teach the exact skills and experience companies will be looking for tomorrow, not yesterday.
There’s only one training program that gives you all the foundational knowledge you need to be successful (soft and hard skills) and lets you gain hands-on experience working on real-world equipment. Here’s a link to it.