There are a lot of things happening in the world of AI. But I feel a lot of people don’t fully understand what’s happening with this technology, the direction it is going in, or why much of what you read (though not all of it) is exaggerated or over-dramatized. I may miss a few topics in this first version, and if I do I’ll edit this post later, but I’ll cover the basics for now.
Here is a general breakdown of how “today’s” AI typically works. Almost every major company (even OpenAI) builds it in roughly the same way.
At its core, artificial intelligence (AI) involves the use of algorithms and computational models to enable machines to perform tasks that would typically require human intelligence. The foundational elements of AI include data, algorithms, and machine learning models. Here is a high-level overview of how AI works:
- Data Collection: AI systems require data to learn from and make decisions. This data can come from a variety of sources, such as text, images, audio, video, sensor readings, and more. Data is collected, preprocessed, and organized to be used in the training process.
- Feature Extraction: Features are specific characteristics or attributes extracted from raw data that are relevant for a specific task. For example, in image recognition, features could include colors, shapes, and patterns. The process of feature extraction involves converting raw data into a structured format that can be fed into machine learning algorithms.
- Algorithm and Model Selection: AI uses algorithms and machine learning models to identify patterns and make predictions. An algorithm is a set of instructions that the AI system follows to achieve a specific outcome. A machine learning model is a mathematical representation of a real-world process or relationship, which is trained on data to make predictions or decisions.
- Training: Machine learning models are trained using labeled data (supervised learning) or unlabeled data (unsupervised learning). In supervised learning, the model learns to map input features to known outputs using labeled examples. In unsupervised learning, the model identifies patterns or relationships in the data without predefined labels. During training, the model adjusts its parameters to minimize errors and improve accuracy.
- Evaluation: After training, the model is evaluated using a separate dataset not used in training (test dataset). This evaluation measures the model’s accuracy, precision, recall, and other metrics, and determines how well the model is likely to perform on new, unseen data.
- Deployment: Once the model is trained and evaluated, it can be deployed to perform its intended task. For example, a trained image recognition model could be deployed to an app to automatically identify and tag objects in photos.
- Inference: Inference is the process of using the trained model to make predictions or decisions based on new input data. For example, given a new image, an image recognition model can infer the objects present in the image.
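The training, evaluation, and inference steps above can be sketched with a toy example. This is a deliberately minimal 1-nearest-neighbor classifier in plain Python — the dataset, the classifier choice, and all the names here are illustrative assumptions, not what production AI systems actually use:

```python
import math

# Toy labeled dataset for supervised learning: (feature vector, label).
# In a real system the features would come from the feature-extraction step.
train = [([1.0, 1.0], "A"), ([1.2, 0.8], "A"),
         ([4.0, 4.2], "B"), ([3.8, 4.0], "B")]
test = [([0.9, 1.1], "A"), ([4.1, 3.9], "B")]  # held out, never trained on

def predict(x, examples):
    """1-nearest-neighbor: return the label of the closest training example."""
    return min(examples, key=lambda ex: math.dist(x, ex[0]))[1]

# Evaluation: measure accuracy on data the model did not see during training.
correct = sum(predict(x, train) == y for x, y in test)
accuracy = correct / len(test)
print(accuracy)  # → 1.0 on this tiny toy set

# Inference: make a prediction for a brand-new input.
print(predict([4.0, 4.1], train))  # → "B"
```

A real pipeline swaps the toy classifier for a trained model (a neural network, say) and the two-point test set for a large held-out dataset, but the shape of the loop — train, evaluate on unseen data, then run inference — is the same.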
AI systems often involve feedback loops, where the model’s predictions are used to continuously improve its performance through additional training and fine-tuning. The process of building and using AI systems is iterative, and models are regularly updated and retrained to adapt to new data and changing conditions.
Much of this technology is advancing toward a single goal: achieving AGI. That is ultimately what OpenAI (which is leading the market by leaps and bounds) is trying to achieve.
Artificial General Intelligence (AGI), also known as “strong AI,” would require a machine or AI system to understand, learn, and apply knowledge across a wide range of tasks, much like human intelligence. AGI would have the capability to autonomously transfer knowledge from one domain to another, reason, and solve complex problems with little to no human intervention.
While current AI models are highly specialized and excel in specific narrow domains (known as “narrow AI” or “weak AI”), achieving AGI poses several significant challenges that are not yet fully addressed by existing AI models and methods. The development of AGI may require fundamentally different approaches or breakthroughs that go beyond the current state of AI research and technology.
Some of the potential differences or additional requirements for AGI include:
- Generalization and Transfer Learning: AGI would need the ability to generalize knowledge and skills across various tasks and domains, unlike narrow AI models that are typically limited to specific tasks. Transfer learning, where knowledge from one domain is applied to another, would be a key characteristic of AGI.
- Understanding and Reasoning: AGI would require higher-level cognitive abilities, such as understanding natural language, context, causality, and abstract concepts. It would need the capacity to reason, plan, and make decisions based on incomplete or ambiguous information.
- Autonomy and Adaptability: AGI would need to operate autonomously, adapt to new and dynamic environments, and learn from experience. It would need to be capable of self-directed learning and goal-setting.
- Common Sense and Intuition: AGI would require common sense reasoning, the ability to make reasonable inferences about the world, and the capacity for intuitive understanding, even when faced with limited or noisy data.
- Integration of Modalities: AGI would need to integrate and process information from multiple modalities, such as vision, auditory, and language, to form a cohesive understanding of the world.
- Ethical and Moral Considerations: AGI would need to navigate complex ethical and moral dilemmas, which would require addressing questions related to values, ethics, and decision-making in alignment with human values.
The development of AGI is an area of active research and exploration, and there is no consensus on when or how AGI might be achieved. It is an open question whether advancements in current AI methods will eventually lead to AGI, or whether entirely new approaches and conceptual breakthroughs will be necessary to reach it.
From my viewpoint, I find it hard to believe that our current approach (training models on large amounts of data) can achieve AGI. That would amount to assuming something becomes “alive” simply because it knows enough — as if knowledge were the source of what makes things alive. I think that is where the research comes in. As time goes on, we will develop new (sometimes black-box) technologies that eventually lead to a system that is self-sustaining, self-learning, autonomous, able to make decisions, and maybe even capable of feeling some form of emotion.
Another thing I’ve noticed is that many news outlets, and people in general, don’t fully understand what we are heading into.
AI will take some jobs, replace others, and change others. But overall it is the same as any other technology. If people use it to expand, learn, and grow, then they’ll grow into new jobs or be able to ride out the job changes that come. I think many jobs will end up with two different sets of people: AI-enabled, and not. Lawyers who choose to use AI as a “tool,” for example, will have a clear advantage over those who don’t in some situations. The same goes for writers, artists, and everyone else. It is a tool, the same as the internet. And yes, it might phase out certain jobs, but as that happens it will open up access to other jobs and help restructure the economy and the workforce at the same time.
Two aspects of the news concern me.
AI was trained on a large volume of data, and that includes fictional data. These models are also built to be able to write fictional stories. ((I’ll finish the post in a few days or so))
Part 1 can be found here.
Part 2 can be found here.