Thursday, October 10, 2024

The Impacts of AI on Cybersecurity


Abraam Ibrahim, 2024 Commonwealth Cyber Initiative Intern

Introduction


In 2024, an estimated 77% of businesses were either using or exploring the use of Artificial Intelligence (AI) tools in their business processes (National University, 2024). The light-speed pace of AI development and adoption has massive implications across all industries and sectors. Finance experts use AI to assist in trading, doctors use AI for ultrasound measurement, and factory owners employ AI to create smart factories, to name just a few examples. The ever-advancing field of cybersecurity, already well-known for its rapid acceleration, is among the most impacted by this AI revolution.


This article explores the anticipated impacts that the rise of AI could have on cybersecurity, and how AI's ability to enhance authentication, automate tasks, and improve threat detection and response is revolutionizing the field while simultaneously introducing new challenges and threats.


The Power of AI in Cybersecurity


AI can streamline and automate a wide variety of security tasks. Authentication, incident response, threat detection, threat intelligence, and other tasks can be significantly improved, or in some cases fully automated, with AI assistance. By leveraging natural language processing (NLP) and deep learning (DL) capabilities, security experts can make cyber operations more effective and efficient. Similarly, AI integration into security solutions will enhance the overall security posture of organizations by enabling faster and more informed response and defense measures, ultimately reducing the time to detect and mitigate threats.

AI in Authentication


Authentication, the process of validating users, has always been an indispensable guard against security breaches. As the thinking goes, if malicious parties can be prevented from ever gaining access to a target (database, account, server, etc.), then all damage can be mitigated preemptively. Accordingly, cyber professionals have created increasingly secure login methods that enhance both security and user convenience (biometrics, MFA), which have now become the industry norm.


With the advent of AI-enabled machine learning algorithms, security professionals are now employing user behavior analytics (UBA) to detect unusual activity during a sign-in attempt. An algorithm can take various inputs, such as typing speed, cursor movement, touch input, and biometric data (voice, fingerprints), to determine the validity of an individual request, authenticating legitimate users while flagging and reporting suspicious sign-in attempts.
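The idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real UBA product: the feature names, baseline values, tolerances, and threshold are all invented for the example, and a production system would learn these per user from historical sign-in data.

```python
# Hypothetical user-behavior-analytics (UBA) check for one sign-in attempt.
# All feature names, baselines, tolerances, and the threshold are invented
# for illustration; a real system would learn them from past sign-ins.

BASELINE = {
    "typing_speed_cpm": 280.0,   # this user's usual characters per minute
    "cursor_speed_pxs": 420.0,   # usual cursor speed, pixels per second
    "sign_in_hour": 14.0,        # usual local hour of sign-in
}
TOLERANCE = {
    # Rough expected spread for each feature (like a standard deviation).
    "typing_speed_cpm": 40.0,
    "cursor_speed_pxs": 90.0,
    "sign_in_hour": 3.0,
}

def anomaly_score(attempt: dict) -> float:
    """Average absolute deviation from the baseline, in tolerance units."""
    devs = [abs(attempt[k] - BASELINE[k]) / TOLERANCE[k] for k in BASELINE]
    return sum(devs) / len(devs)

def assess_sign_in(attempt: dict, threshold: float = 2.0) -> str:
    """Allow attempts close to the user's habits; flag everything else."""
    return "allow" if anomaly_score(attempt) < threshold else "flag for review"

# A sign-in close to this user's habits passes; a very different one is flagged.
normal = {"typing_speed_cpm": 270, "cursor_speed_pxs": 400, "sign_in_hour": 15}
odd = {"typing_speed_cpm": 90, "cursor_speed_pxs": 1200, "sign_in_hour": 3}
```

Real deployments typically combine many more signals (device fingerprint, geolocation, velocity between logins) and feed the score into a risk engine rather than a hard allow/deny rule.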


Cybersecurity Automation


AI’s ability to tirelessly sift through massive amounts of data has profound implications for the industry. AI will enable professionals to focus their time on the most important areas while providing cost-saving benefits due to increased efficiency.
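One concrete form of that data-sifting is automated log triage: surfacing only the log lines that deserve a human analyst's attention. The sketch below uses simple keyword rules as a stand-in for the ML classifiers the article describes; the patterns, severities, and log lines are illustrative assumptions, not any real product's rule set.

```python
import re

# Hypothetical triage rules for authentication log lines. The patterns and
# severity labels are invented for illustration; a real pipeline might use
# a trained classifier instead of fixed regexes.
RULES = [
    (re.compile(r"failed password", re.I), "high"),
    (re.compile(r"invalid user", re.I), "high"),
    (re.compile(r"accepted password", re.I), "low"),
]

def triage(lines):
    """Return only the lines worth a human's attention, tagged by severity."""
    findings = []
    for line in lines:
        for pattern, severity in RULES:
            if pattern.search(line):
                findings.append((severity, line))
                break  # first matching rule wins
    return findings

logs = [
    "Oct 10 02:14:01 sshd: Failed password for root from 203.0.113.9",
    "Oct 10 02:14:05 sshd: Accepted password for alice from 198.51.100.4",
    "Oct 10 02:14:07 cron: scheduled job started",
]
```

Here `triage(logs)` keeps the two authentication events and drops the routine cron line, which is the cost-saving effect the paragraph describes: analysts review a short, prioritized list instead of the full stream.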

Friday, August 2, 2024

Pause before Graduate School?

I went to graduate school right after college, and I regret it. I wish I had waited and joined the workforce first.

Story by Rashi Goel (Insider)

  • Right after I graduated from college, I went straight into grad school.
  • I wish I had waited to go to grad school and instead followed my other interests.
  • I want young people to know that they don't have to rush into getting their master's degree.
The final exams for my bachelor's degree took place in March 2005, and by June, I was enrolled in my master's program.  
The two-month break didn't feel like one, as it was a whirlwind of applications and entrance exams. I would have loved to travel, write, and contemplate my future career path, but instead, I plunged straight into further studies. After pursuing a bachelor's in business administration with a major in marketing, I felt compelled to continue studying marketing, resisting exploring other options.

I should have known an MBA was not the right choice. Reflecting on my childhood and teenage years, I realized that my academic pursuits overshadowed my growing interest in nature and the outdoors. I was passionate about environmental systems, tree-planting drives, camping, and painting nature scenes. This inclination starkly contrasted with the desk job awaiting me in the corporate world.

Had I taken some time off after college — a gap year perhaps — I would have prioritized travel, indulged in creative writing, and done an internship. Those experiences would have better prepared me for life beyond the university bubble. My MBA

Wednesday, May 1, 2024

High School Internships and Senior Experience Externships this summer

The work-based learning staff at Virginia Tech's Thinkabit Labs in the Washington, D.C. Area is welcoming students from nearby schools in Alexandria, Arlington, DC Public Schools, Fairfax, Falls Church, Loudoun, Manassas, Manassas Park, Prince George's County (MD), and Prince William County.  

We welcome students interested in any career path, but we are particularly oriented to support internships in Computer Science, Engineering (any), Environmental Science, Health and Medical Science, Natural Resources, Physical Computing, Public Policy / Government, and Social Sciences. 

While in-person and hybrid internships are encouraged, virtual internships may be limited because of the added burden remote internships place on staff. All internships should be 280 hours or more to meet the requirements of a high-quality work-based learning experience as defined by VDOE. We strongly encourage in-person participation on Tuesdays, Wednesdays, and Thursdays.

Senior Experience externships (40 hours) in May and June are available for any high school senior.

Send a message expressing your interest to Thinkabit@VT.edu.  


Monday, April 8, 2024

Register today - Invent Virginia / Invent DC regional event - April 13, 12 noon to 3pm


 Last-minute registration is possible, but please try to register online in advance to minimize delays when you arrive.

The form will require only a few minutes.  Those under 13 will require a parent to complete the form.

https://forms.gle/ojA1cyxNB8Pvchv79   


If you're unsure about a past registration, it won't hurt to register again.

Monday, March 11, 2024

Invent Virginia Regional Expo - April 13, 12pm - 3pm

Virginia Tech launches Invent Virginia and Invention D.C. K-12 Expos and Competitions. Educators and advocates for STEM education and entrepreneurship can begin spreading the word about upcoming opportunities for young inventors and innovators to showcase their ideas.

Virginia Tech, sponsored by RTX and Amazon in the Community, promotes invention and innovation, demonstrating the importance of design processes and communication skills. Invent Virginia provides curricula and support for local, regional, and state Expos and competitions: 
  • April 13 Northern Virginia and DC Regional Expo;
  • April 20 Coastal Virginia Expo in Norfolk; and 
  • April 21 virtual Statewide competition. 
The statewide event will nominate up to seven projects for the national Invention Convention Worldwide at the Henry Ford Museum in Dearborn, Michigan, in early June (June 5-7). Projects completed through Technology and Engineering courses, TSA, PLTW, Science and Engineering Fairs, Odyssey of the 

Wednesday, February 28, 2024

Research Intern Blog Post: The Risks of AI in the Workplace - Divine Doamekpor


The Risks of AI in the Workplace:

What causes the most concern with AI?

Divine Doamekpor, Thinkabit Lab Research Intern


Source: Primer.ai

Introduction

Artificial Intelligence (AI) has become a vital part of modern society, offering numerous benefits in multiple fields (e.g., efficiencies, information aggregation, supporting research). However, the rapid advancement and widespread adoption of AI technologies also raise significant concerns regarding their potential risks. This report explores such concerns commonly discussed in the media, including the overreliance on AI, inherent biases in AI systems, and the impact on employment leading to job layoffs.


Over-Reliance on AI:

One of the primary concerns associated with AI is the growing dependence on these systems across diverse fields such as healthcare, finance, education, and criminal justice, where AI is increasingly relied upon to make critical decisions. Two examples illustrate this reliance: State v. Alex Turner, a fictional court case I asked an AI to invent along with a way to prove the defendant's innocence, and Michael Cohen's submission of AI-generated fake legal cases to a court. These examples reveal a concerning trend: a readiness to accept AI-generated content without question, underscoring the risk of diminishing human diligence and oversight. As AI tools become more sophisticated and capable of creating convincing accounts of incidents or disputes, society faces ethical and legal challenges. This reliance not only raises concerns about diminishing human effort and the potential for AI to replace critical thinking and accountability, but also reveals our vulnerability when we forgo human verification and ethical responsibility in decision-making processes.


Bias in AI Systems:

AI systems are not immune to biases; they can inadvertently perpetuate and amplify existing societal biases. The data used to train these models often reflect unintentional and/or hidden historical disparities, resulting in biased predictions and decisions. This bias can appear in various forms, such as gender, racial, or socioeconomic bias. For example, an AI system used for hiring might prioritize male candidates over female ones because the historical data it was trained on included more men in specific job roles. According to the Synergia Foundation, "In 2015, Amazon realized that their algorithm used for hiring employees was found to be biased against women." 

Similarly, AI used in criminal sentencing could impose harsher sentences on minority groups if the training data mirrors past judicial biases. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, which judges use to determine whether to hold defendants in custody or release them pending trial, was found to be biased against African Americans, according to a ProPublica analysis.

The consequences of biased AI can profoundly affect individuals' lives in hiring processes, criminal sentencing, and access to essential services, underscoring the importance of addressing and mitigating bias in AI algorithms to ensure fair and equitable outcomes.
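One common way practitioners screen for the kind of hiring bias described above is the "four-fifths rule": compare each group's selection rate to the most-favored group's rate, and treat a ratio below 0.8 as a sign of possible adverse impact. The sketch below uses invented applicant counts purely for illustration; the numbers are not from the Amazon or COMPAS cases discussed above.

```python
# Illustrating the "four-fifths rule" disparate-impact screen.
# All applicant and hire counts below are invented for this example.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def disparate_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return rate_group / rate_reference

# Invented counts: 90 of 200 male applicants hired vs. 30 of 150 female.
rate_m = selection_rate(90, 200)    # 0.45
rate_f = selection_rate(30, 150)    # 0.20
ratio = disparate_impact_ratio(rate_f, rate_m)

# Under the four-fifths rule, a ratio below 0.8 suggests adverse impact.
flagged = ratio < 0.8
```

With these made-up numbers the ratio is 0.20 / 0.45, well under 0.8, so the screen would flag the process for closer review. This is only a first-pass check; it does not by itself establish the cause of the disparity.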

Impact on Employment and Job Layoffs:

Integrating AI technologies in the workforce has undoubtedly increased efficiency and productivity. However, this integration comes with significant implications for human employment. Automation and AI-driven systems can potentially replace specific job roles, leading to job layoffs and economic disruption. Industries such as manufacturing, customer service, and transportation are already witnessing the transformative impact of AI, which raises concerns about the displacement of human workers. In this context, the need for reskilling and upskilling programs becomes paramount to mitigate the negative consequences on the labor market. Echoing the sentiment expressed by Richard Baldwin during a panel at the 2023 World Economic Forum's Growth Summit, "AI won't take your job," it is indeed "somebody using AI that will take your job." This highlights the importance of adapting to the evolving job landscape by prioritizing specific AI-related skills in reskilling and upskilling programs.

Source: brookings.edu


Conclusion:

Recognizing and addressing the risks of AI, especially in the workplace, is crucial to ensuring a future where technology serves humanity responsibly and offers transformative advancements. The rapid integration of AI in various industries poses significant challenges, including job displacement, privacy concerns, and the automation and amplification of bias. By focusing on these risks, we can strive for a balance between innovation and ethical considerations, aiming to harness AI's benefits while protecting workers from potential harm. This requires the implementation of robust ethical guidelines, the development of AI technologies that complement human skills rather than replace them, and the creation of policies that support individuals affected by automation. Fostering a workplace culture that values human oversight and ethical AI use can mitigate risks and ensure that AI is a tool for enhancing, rather than undermining, the workforce's integrity and well-being. Through these measures, we can navigate the challenges posed by AI in the workplace, ensuring that technological progress benefits all members of society.

