Category: Artificial Intelligence


Marble portrait heads of four philosophers in the British Museum. From foreground: Socrates, Antisthenes, Chrysippos, Epicurus.

🤖 🏛️ Have you ever wondered about the connection between AI and Ancient Greek Philosophy?

🧔 📜 Ancient Greek philosophers such as Aristotle, Plato, Socrates, Democritus, Epicurus, and Heraclitus explored the nature of intelligence and consciousness thousands of years ago, and their ideas are still relevant today in the age of AI.

🧠 📚 Aristotle believed that there are different levels of intelligence, ranging from inanimate objects to human beings, with each level having a distinct form of intelligence. In the context of AI, this idea raises questions about the nature of machine intelligence and where it falls in the spectrum of intelligence. Meanwhile, Plato believed that knowledge is innate and can be discovered through reason and contemplation. This view has implications for AI, as it suggests that a machine could potentially have access to all knowledge, but it may not necessarily understand it in the same way that a human would.

💭 💡 Plato also believed in the concept of Platonic forms, which are abstract concepts or objects that exist independently of physical experience. In the context of machine learning, models can be thought of as trying to learn these Platonic forms from experience, such as recognizing patterns and relationships in data.

⚛️ 💫 Democritus is known for his work on atoms and his belief that everything in the world can be reduced to basic building blocks. In the context of AI, this idea of reducing complex systems to their fundamental components has inspired the development of bottom-up approaches, such as deep learning, reinforcement learning and evolutionary algorithms.

🔥 🌊 Heraclitus emphasized the idea that “the only constant is change”. This philosophy can provide valuable insights into the field of machine learning and the challenges of building models that can effectively handle non-stationary data. It highlights the importance of designing algorithms that are flexible and can adapt to changing environments, rather than relying on fixed models that may become outdated over time.
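
📈 🔄 To make the analogy concrete, here is a minimal Python sketch (using scikit-learn’s `SGDClassifier` on a made-up drifting stream, purely illustrative) of a model that keeps adapting as its data distribution changes, instead of staying fixed:

```python
# A toy sketch of "embracing change": an incrementally trained classifier
# updated batch by batch while the data distribution drifts over time.
# The drifting stream below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
rng = np.random.default_rng(0)

for t in range(100):                         # a stream of mini-batches
    drift = t / 100.0                        # the feature means drift over time
    X = rng.normal(loc=drift, size=(32, 5))
    y = rng.integers(0, 2, size=32)
    X[y == 1] += 1.0                         # class 1 is shifted by a constant
    model.partial_fit(X, y, classes=[0, 1])  # adapt to the latest data

print(model.score(X, y))                     # accuracy on the newest batch
```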

🔁 🏆 What about reinforcement learning? Aristotle believed in the power of habituation and reinforcement to shape behavior. According to him, repeated exposure to virtuous actions can lead to the formation of good habits, which in turn leads to virtuous behavior becoming second nature. This idea is similar to the concept of reinforcement learning in AI, where an agent learns to make better decisions through repeated exposure to rewards or punishments for its actions. The similarities between Aristotle’s ideas and reinforcement learning show that the fundamental principles of reinforcement and habituation have been recognized and explored for thousands of years, long before modern psychological theories, and are still relevant today in discussions of both human behavior and AI.
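
🎮 🧩 As a hedged illustration (a toy example written for this post, not a quote from any textbook), here is tabular Q-learning on a tiny corridor world: repeated exposure to a reward gradually turns “move right” into the agent’s second nature:

```python
# A toy sketch of habit formation via rewards: tabular Q-learning on a
# 5-state corridor; reaching the rightmost state yields a reward of 1.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.3
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: explore sometimes, otherwise follow the habit.
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Repeated reward exposure gradually reshapes the value estimates.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned "habit": move right in every non-terminal state.
print(Q[:-1].argmax(axis=1))        # -> [1 1 1 1]
```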

🕊️ 💛 The Greek concept of the soul was also tied to intelligence and consciousness. Some philosophers believed that the soul is what gives a person their unique identity and allows them to think, feel, and make decisions. In the context of AI, this raises questions about whether a machine can truly have a soul and be conscious in the same way that a human is.

💀 💔 Another Greek philosopher who explored the nature of intelligence and consciousness is Epicurus. He believed that the mind and soul were made up of atoms, just like the rest of the physical world. He also believed that the mind and soul were mortal and would cease to exist after death. In the context of AI, Epicurus’ ideas raise questions about the nature of machine consciousness and whether it is possible for machines to have a kind of “mind” or “soul” that is distinct from their physical components. It also raises questions about the possibility of creating machines that are mortal or that can “die” in some sense. Epicurus’ ideas about the nature of the mind and soul are still debated today, and their relevance to the field of AI is an ongoing topic of discussion.

👁️ 🗣️ With the advancements in computer vision, natural language understanding, and deep learning in general, it’s more important than ever to consider these philosophical questions. For example, deep neural networks can analyze vast amounts of data and generate human-like responses, but do they truly understand the meaning behind the words they generate? This is where the debate between connectionist AI and symbolic AI comes in.

🧠 🕸️ Connectionist approaches to artificial intelligence are based on the idea that intelligence arises from the interactions of simple, interconnected processing units. Symbolic approaches, on the other hand, are based on the idea that intelligence arises from the manipulation of symbols and rules. In the context of ancient Greek philosophy, connectionist AI could be seen as aligning more with the idea of knowledge being discovered through experience and observation, while symbolic AI aligns more with the idea of knowledge being innate and discovered through reason and contemplation.

🗣️ 💬 Humans use natural language, which includes speech and written text, to communicate with the world. However, natural language is not structured and can be unpredictable because it emerges naturally rather than being designed. If it had been designed, natural language processing would have been solved a long time ago. Nowadays, deep neural language models are used to learn language from text by predicting the next token in a sequence. However, natural languages are continually evolving and changing, which makes them a moving target.

🧠 ❤️ Plato expressed a negative view towards human languages because he believed they are unable to fully capture the breadth and depth of a person’s thoughts and emotions. He asserted that genuine human communication, such as that which is conveyed through body language, eye contact, speech, and physical touch, is necessary for the progression of the human mind and the creation of new ideas. In today’s digital age, we are observing a decline in face-to-face human interaction, underscoring the importance of acknowledging the limitations of language. Despite the challenges inherent in natural language, it is crucial that we persist in developing tools to better comprehend and employ it. In addition to embracing technological advancements, it is essential that we strive to maintain meaningful human connections to preserve the richness of the human experience.

🏛️ 🤖 Have you ever heard of Talos? It is the first robot in Greek mythology! According to legend, Talos was a bronze automaton created by Hephaestus, the god of fire and metalworking, to protect the island of Crete. He was said to be invulnerable and possessed immense strength, making him a formidable guardian. This idea of a powerful, indestructible automaton has been a recurring theme in science fiction and continues to inspire new developments in the field of AI and robotics.

⚖️ 🌍 It’s fascinating to consider how the myth of Talos reflects the enduring human fascination with creating machines that can surpass our own capabilities. The notion of a powerful automaton created to guard a specific territory mirrors modern ideas around autonomous systems that are designed to perform specific tasks and operate independently. However, while Talos was depicted as a purely mechanical creation, contemporary AI and robotics are built upon complex algorithms and data sets that allow machines to learn, adapt, and improve over time. By embracing the potential of these technologies, we have the opportunity to develop new solutions to some of the most pressing challenges facing our world today, from climate change to healthcare.

As we continue to push the boundaries of what machines are capable of, it’s important to consider the ethical implications of creating truly intelligent and conscious entities. Some believe that the development of advanced AI could lead to a future where machines are not only equal to humans, but even superior to us. Others argue that such a scenario is unlikely, as true consciousness may only be achievable through biological processes that cannot be replicated in machines. As we navigate these complex questions, it’s crucial that we prioritize ethical considerations and work to ensure that the development of AI and robotics benefits humanity as a whole.

🙏🏽🤝 When it comes to ethics and morality, Aristotle is one of the most influential figures in history. His emphasis on the importance of human virtues and the pursuit of a “good life” has been a guiding principle for philosophers, theologians, and thinkers across generations. Aristotle’s teachings offer valuable insights into how we should approach the ethical implications of creating intelligent agents. One crucial aspect of this is ensuring that we treat these agents with the same respect and dignity that we would afford to any sentient being. This means avoiding the temptation to view AI as mere tools or objects to be used for our own purposes, and recognizing their autonomy as intelligent entities. Another important consideration is the issue of bias and discrimination, which can be inadvertently built into AI systems through the data and algorithms used to create them. Aristotle’s emphasis on justice and fairness can serve as a guide to ensure that we address these issues and ensure that the development of AI is done in a way that is equitable and beneficial for all of humanity.

🧠 🔬 Ending this article, it would be remiss not to mention Socrates, widely considered to be the wisest of all Greek philosophers. Not because of his vast knowledge, but because he recognized the limits of his own knowledge. “I know one thing, and that is that I know nothing,” Socrates famously stated. This concept is particularly relevant to the field of AI, as we continue to push the boundaries of what machines can do and be. As we develop increasingly complex and intelligent AI, the question arises whether it will be possible to prove the existence of self-awareness in these machines.

💬 These are just a few examples of how the ideas of ancient Greek philosophers and mythology can inform our understanding of AI. I hope these thoughts spark further conversation and contemplation on this fascinating topic.

🤔 What do you think about the potential for machines to truly be intelligent and conscious?

I am so proud that Crowdspeak’s alpha demo is up and running!

On Crowdspeak you will be able to create your own crowd-questions. For this demo, there is just a fixed example question: “What are the biggest challenges of working from home?”.

Visit Crowdspeak and type your own response! Crowdspeak will then take your response into account, and a new collective response will be generated in just a few minutes.

This collective response is a machine-generated text designed to express the majority view of the individual responses given by the participants!

Don’t forget to give us your feedback and to subscribe to the Crowdspeak newsletter for more!

I am so proud that our financial news summarization model ranks within the top 4% of all 8,026 models on Hugging Face (in terms of average monthly downloads)! I would like to thank my colleagues Grigorios Tsoumakas, Tatiana Passali, and Alex Gidiotis!

Hello my AI friends!

Today, I would like to share with you skrobot!

skrobot is an open-source Python module I have created at Medoid AI for automating Machine Learning (ML) tasks. It is built on top of the scikit-learn framework and follows object-oriented design (OOD) for ML task creation, execution, and reproducibility. Multiple ML tasks can be combined to implement an experiment. It also provides seamless tracking and logging of experiments (e.g., saving experiments’ parameters).

It can help Data Scientists and Machine Learning Engineers:

  • to keep track of modelling experiments / tasks
  • to automate the repetitive (and boring) stuff when designing ML modelling pipelines
  • to spend more time on the things that truly matter when solving a problem

The current release of skrobot (1.0.13) supports the following ML tasks for binary classification problems:

  • Automated feature engineering / synthesis
  • Feature selection
  • Model selection (i.e. hyperparameters search)
  • Model evaluation
  • Model training
  • Model prediction

For more information you can check out the online documentation!

Lastly, many thanks to all contributors who helped to extend and support skrobot.

Stay safe!

Hello my friends!

Today, I would like to share with you Time Series Segmentation & Changepoint Detection!

It is an open-source R/Shiny app for time-series segmentation and changepoint detection tasks. The app acts as a front-end for R packages such as the magnificent changepoint package.

I had the pleasure of working together with wonderful colleagues at Medoid AI to develop this application. My role in this small project was mainly organizational and advisory.

For more information on the parameters and algorithms currently included in the app please read the following paper by Killick et al. For an overview of changepoint packages that may be included in the future please see the following page.

Stay safe!

By passing millions of ImageNet images through InceptionV1 (a state-of-the-art deep convolutional neural network), we can extract the image patches that most strongly activate specific neurons in its various convolutional layers.

By projecting the image patches to 2D using UMAP we can see what the neural network “sees” at the various layers.

This is a great way to explain how a computer vision model makes its classification decisions.
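
For readers who want to try this themselves, here is a minimal sketch of the recipe (my own approximation, not the original article’s code): collect the activations of one InceptionV1 (GoogLeNet) layer with a forward hook and project them to 2D with UMAP. The random tensor stands in for real ImageNet batches, and `inception4e` is just one of torchvision’s mixed layers:

```python
# A minimal sketch (assumed setup): collect activations of one GoogLeNet
# (InceptionV1) layer over a batch of images and project them to 2D with UMAP.
import torch
import torchvision.models as models
import umap  # from the umap-learn package

model = models.googlenet(weights="IMAGENET1K_V1").eval()

activations = []

def hook(module, inputs, output):
    # Global-average-pool the feature maps: one vector per image.
    activations.append(output.mean(dim=(2, 3)).detach())

# 'inception4e' is one of the deeper mixed layers; any conv layer works.
handle = model.inception4e.register_forward_hook(hook)

images = torch.randn(256, 3, 224, 224)  # stand-in for real ImageNet batches
with torch.no_grad():
    model(images)
handle.remove()

features = torch.cat(activations).numpy()
embedding = umap.UMAP(n_components=2).fit_transform(features)
print(embedding.shape)  # (256, 2): one 2D point per image
```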

However, the following part of the article was the reason for my post:

“….There is another phenomenon worth noting: not only are concepts being refined as you move from layer to layer, but new concepts seem to be appearing out of combinations of old ones….”

This is how a world of complexity works.

We know that deep neural networks perform hierarchical feature learning and combine simpler features to learn more complex ones. This is one of the reasons why we use deep learning for audio, visual and textual data.

Deep learning can decompose the complexity of data!

Have you ever wondered why we randomly initialize the weights of a neural network?

After you read this post you will know why!

When the weights of the neurons in a neural network layer are all initialized to the same value, all neurons of the layer produce the same output during forward propagation.

Furthermore, during backpropagation the gradients of the loss w.r.t. the weights of the layer also have the same values. So, when training with gradient descent, the weights change in the same way.

Lastly, when gradient descent converges, the weight matrix of the layer contains the same values for all neurons and thus the neurons have learned the same thing.

To break this symmetry and allow the neurons of the layer (and in general in all layers) to learn new and different features we randomly initialize the weights!
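
Here is a small PyTorch sketch of the symmetry problem described above (a constructed demonstration with arbitrary toy data): with constant initialization, every neuron in the first layer receives exactly the same gradient:

```python
# A small sketch demonstrating the symmetry problem: with constant
# initialization, all hidden neurons receive identical gradients.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(32, 10)          # 32 samples, 10 features (toy data)
y = torch.randn(32, 1)

model = nn.Sequential(nn.Linear(10, 4), nn.Tanh(), nn.Linear(4, 1))

# Initialize every weight and bias to the same constant value.
for p in model.parameters():
    nn.init.constant_(p, 0.5)

loss = nn.MSELoss()(model(x), y)
loss.backward()

# Each row of the first layer's gradient is identical: the neurons are
# indistinguishable and will stay identical under gradient descent.
print(model[0].weight.grad)
```

Re-running this with the default random initialization makes the gradient rows differ, which is exactly what breaks the symmetry.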

Despite the promise of Feature Learning in Deep Learning, where dense, low-dimensional, compressed representations can be learned automatically from high-dimensional raw data, Feature Engineering is usually the most important factor for the success of an ML project.

Among ML practitioners the best learning algorithms and models are well known, and most effort goes into transforming the data so that it expresses, as much as possible, the useful structure that best models the underlying problem.

In other words, the success of an ML project depends mostly on the data representation, not on model selection / tuning. When the features are not garbage, even the simplest algorithms with default hyperparameter values can usually give good results.

Conditional Language Models are not used only in Text Summarization and Machine Translation. They can also be used for Image Captioning!

Here is a great example from Machine Learning Mastery of how we can connect the Feature Extraction component of a SOTA Computer Vision model (e.g., VGG, ResNet, Inception, Xception, etc.) to the input of a Language Model in order to generate the caption of an image.

The whole deep learning architecture can be trained end-to-end. It is a simple encoder-decoder architecture but it can be extended and improved using an attention interface between encoder and decoder, or even using Transformer layers!

Adding attention not only enables the model to attend differently to various parts of the input image, but also to explain its decisions: for each generated word in the output caption we can visualize the attended part of the input image.
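
As a sketch of the architecture described above (my own condensed Keras approximation of the merge-style captioning model, not the Machine Learning Mastery code itself; `vocab_size` and `max_len` are assumed dataset-dependent values):

```python
# A condensed sketch of a merge-style captioning model: CNN image features
# plus an LSTM over the partial caption, merged to predict the next word.
from tensorflow.keras.layers import (Input, Dense, Embedding, LSTM,
                                     Dropout, add)
from tensorflow.keras.models import Model

vocab_size, max_len = 10000, 34  # assumed dataset-dependent values

# Image branch: 4096-d features from a pre-trained VGG16 (computed offline).
img_in = Input(shape=(4096,))
img_feat = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: the caption generated so far.
txt_in = Input(shape=(max_len,))
txt_feat = LSTM(256)(Embedding(vocab_size, 256, mask_zero=True)(txt_in))

# Decoder: merge both branches and predict the next word.
merged = Dense(256, activation="relu")(add([img_feat, txt_feat]))
out = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```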

In Machine Learning we use minimum-disturbance learning algorithms (e.g., LMS, Backprop) to tune the trainable parameters of topological architectures. Instead of programming a model we expose it to observations and let the algorithm find its optimal parameters. So, in this approach the solution to a task is the set of trainable parameters.

In biology, precocial species are those whose young already possess certain abilities from the moment of birth. There is evidence to show that lizard and snake hatchlings already possess behaviors to escape from predators. Shortly after hatching, ducks are able to swim and eat on their own, and turkeys can recognize predators. These species already have abilities before they are even exposed to the world.

NeuroEvolution of Augmenting Topologies is a research field that aims to simulate exactly this behavior. Specifically, neural network architectures can be found that perform various machine learning tasks without weight training. So, in this approach the solution to a task is the topological architecture.

For more information on this idea check the paper “Weight Agnostic Neural Networks” by Adam Gaier and David Ha.

Natural languages (speech and text) are the way we communicate as a species. They help us express whatever is inside us to the outer world.

Natural languages are not designed. They emerge. Thus, they are messy and semi-structured. If they had been designed, NLP would have been solved by linguists 50 years ago, using context-free grammars and finite automata.

Today, we are trying to artificially “learn” language from text using state-of-the-art Deep Neural Language Models that behave probabilistically, predicting the next token in a sequence.
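
For instance, a pre-trained neural language model can be queried for next-token continuations in a couple of lines (a minimal sketch using the Hugging Face `transformers` pipeline with GPT-2; it downloads the model on first use):

```python
# A minimal sketch: a pre-trained neural language model extending a
# sequence token by token (GPT-2 via the Hugging Face pipeline API).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Natural languages are not designed, they",
                max_new_tokens=20, num_return_sequences=1))
```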

Moreover, natural languages are not static. They evolve and change. The same words can be used at different times with different meanings. Language is a moving target.

Plato, the Greek philosopher, was negative towards “languages” (despite the fact that he wrote so much) because a language cannot express the fullness of a human mind, of a person. Socrates and many philosophers from the Peripatetic school never wrote texts. The only way they communicated was through real human interaction (body language, eyes, speech, touch). Only in this way, they believed, can a human mind and heart evolve and create new worlds.

However, we are living in a century where everything is either digitalised or written, and human communication is reduced to a minimum.

Plato

Assume a train/validation/test split and an error metric for evaluating a machine learning model.

In case of high validation/test errors, something is not working well, and we can try to diagnose whether it is a high-bias or a high-variance problem.

When both the training and validation errors are high then we have a high-bias model (underfitting). When we have a high validation error and low training error we have a high-variance model (overfitting).

Also, there is a case where both validation and training errors are low but the test error is high. This can happen either because we test the model on data from a different world (a different data distribution) or because we have overfitted the hyperparameters to the validation data, just as we can overfit the model’s parameters to the training data.

Machine Learning practitioners always focus on finding, in an unbiased way, models where validation and training errors are low. When that happens, we say “this is a good fit” and then we monitor the model on unseen test data to see how it performs in the real world.
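
Here is a toy sketch of the diagnosis in scikit-learn (synthetic data, with models chosen just to exhibit each regime): an underfitting model scores badly everywhere, an overfitting model only on validation data, and a good fit scores well on both:

```python
# A toy sketch of bias/variance diagnosis: compare training and validation
# errors of an underfitting, an overfitting, and a well-fitting model.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=25.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

for name, model in [("high bias (stump)", DecisionTreeRegressor(max_depth=1)),
                    ("high variance (deep tree)", DecisionTreeRegressor()),
                    ("good fit (linear)", LinearRegression())]:
    model.fit(X_tr, y_tr)
    print(f"{name:26s} "
          f"train MSE={mean_squared_error(y_tr, model.predict(X_tr)):9.1f} "
          f"val MSE={mean_squared_error(y_val, model.predict(X_val)):9.1f}")
```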

Linear / Logistic Regression learning algorithms can be used to learn surfaces that fit non-linear data (for either regression or classification).

This can be done by adding extra polynomial terms (quadratic, cubic, etc.) to the dataset, built from the existing features. The approximated surfaces can be circles, ellipses, parabolas, and hyperbolas.

However, (1) it is difficult to approximate very complex non-linear surfaces this way. Also, when there are many features, (2) it is very computationally expensive to calculate all the polynomial terms (e.g., for 100×100 grayscale images we have 10,000 pixel features, which yield roughly 50 million quadratic terms).

Usually, due to limitations (1) and (2), we either perform dimensionality reduction to reduce the number of features (e.g., eigenfaces for face recognition) or use more complex non-linear approximators such as Support Vector Machines and Neural Networks.
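
A minimal sketch of the polynomial-terms trick (a constructed example with scikit-learn’s `PolynomialFeatures`): on concentric circles, logistic regression fails with the raw features, but the quadratic terms make the classes linearly separable, because the learned boundary is effectively a circle:

```python
# A minimal sketch: logistic regression cannot separate concentric circles
# with raw features, but quadratic terms make the classes separable.
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)

raw = LogisticRegression().fit(X, y)
poly = make_pipeline(PolynomialFeatures(degree=2),
                     LogisticRegression()).fit(X, y)

print(f"raw features:      {raw.score(X, y):.2f}")   # near chance (~0.5)
print(f"+ quadratic terms: {poly.score(X, y):.2f}")  # near 1.0
```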

Some personal notes to all AI practitioners!

In Linear Regression, when using the MSE loss function, the loss is always a bowl-shaped convex function and gradient descent can always find the global minimum.

In Logistic Regression, if we use the MSE, the loss will not be a convex function because the hypothesis is non-linear (it uses a sigmoid activation). Thus, it will be harder for gradient descent to find the global minimum. However, if we use the cross-entropy loss, the loss will be convex and gradient descent can easily converge to the global minimum!

Support Vector Machines also have a convex loss function.

We should always use a convex loss function so that gradient descent can converge to the global minimum (free of local optima).

Neural Networks are very complex non-linear mathematical functions, and their loss function is most often non-convex, so it is common to get stuck in a local minimum. However, most optimization problems in Neural Networks are due to long plateaus and saddle points rather than local minima. For such problems, advanced gradient descent optimization variants were invented (e.g., Momentum, RMSprop, Adam).
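
A quick numerical sanity check of the convexity claim (a toy example for a single training point x = 1, y = 1; second differences approximate curvature): the MSE of a sigmoid hypothesis has regions of negative curvature, while the cross-entropy does not:

```python
# For the hypothesis sigmoid(w * x) with x = 1 and target y = 1, evaluate
# MSE and cross-entropy as functions of the single weight w.
import numpy as np

w = np.linspace(-10, 10, 1001)
p = 1.0 / (1.0 + np.exp(-w))       # sigmoid(w * x), x = 1
mse = (p - 1.0) ** 2               # S-shaped: non-convex in w
xent = -np.log(p)                  # softplus(-w): convex in w

# Negative second differences indicate regions of negative curvature.
print("MSE  min curvature:", np.diff(mse, 2).min())   # < 0 -> non-convex
print("Xent min curvature:", np.diff(xent, 2).min())  # ~ 0 or positive
```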

Happy optimizations!

How much AI and Machine Learning are related to ancient Greek Philosophy? More than you imagine!

In this article you will learn about the never-ending debate between Platonic and Aristotelian ideas about the reality of the world.

I agree with Plato that the world we experience is just a shadow of the real one. One possible interpretation comes from modern physics, with theories such as a) a multidimensional world (e.g., string theory, the theory of relativity), b) quantum physics, and c) non-Euclidean geometrical spaces. Things are not as they appear in our brains. We see only a projection of the real world, and we are essentially living in a matrix limited by our physiology.

However, I also agree with Aristotle that the forms of entities reside only in the physical world. So, the “ideal” forms are learned by our brains and do not pre-exist.

The article touches on the issue of Universals and Particulars in Philosophy and tries to explain them in terms of Machine Learning as well. Machine Learning models learn to separate the universal (the signal) from the particulars (the noise) in observed data of the world.

Plato (left) and Aristotle (right)

Before AI we were “programming” computers.

In this context a programmer creates a program to solve a specific problem using data structures and algorithms. After that, we can run the program with some input data to produce some output. Let’s call this Traditional Programming.

Nowadays, with AI we are “showing” computers.

In this context a programmer shows a learning algorithm input and output data for a specific problem to be solved. The program is created automatically by the learning algorithm. After that, we can run the program with some new input data to produce its output. Let’s call this Machine Learning.
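
A toy contrast of the two mindsets (a hypothetical Celsius-to-Fahrenheit example): in Traditional Programming the human writes the rule, while in Machine Learning the algorithm recovers the rule from input/output pairs:

```python
# A toy contrast (hypothetical example): converting Celsius to Fahrenheit.
import numpy as np
from sklearn.linear_model import LinearRegression

# Traditional Programming: the human writes the rule.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine Learning: the human shows input/output pairs and the learning
# algorithm recovers the rule (the "program") from the data.
celsius = np.arange(-40, 101).reshape(-1, 1)
targets = (celsius * 9 / 5 + 32).ravel()
model = LinearRegression().fit(celsius, targets)

print(fahrenheit(100))                   # 212.0, by the hand-written rule
print(model.coef_[0], model.intercept_)  # ~1.8 and ~32.0, learned from data
```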

Nowadays, computer science departments in universities mostly teach the Traditional Programming mindset. To boost an AI-powered society, it would be wise to teach Machine Learning equally!

Hi people!

I have created -just for fun- a Node.js web application that serves you a random data science image! Currently, the app is deployed on Heroku, but I have limited monthly resources. So, if anyone with a premium account would like to host it for free, I would be grateful!

Heroku Application:

http://random-data-science-image.herokuapp.com

Source Code Repository:

http://github.com/efstathios-chatzikyriakidis/random-data-science-image

Images Database Repository:

http://github.com/efstathios-chatzikyriakidis/data-science-images

Happy Hacking!

PAPER

E. Chatzikyriakidis, C. Papaioannidis and I. Pitas, “Adversarial Face De-Identification,” 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 684-688.

PRESENTER

Anastasios Tefas

PDF

https://github.com/efstathios-chatzikyriakidis/adversarial-face-de-identification/blob/master/texts/icip-2019-presentation.pdf

Presentation topic: “Content-based Image Retrieval”

Presenter: Efstathios Chatzikyriakidis

PDF presentation: https://github.com/efstathios-chatzikyriakidis/content-based-image-retrieval/blob/master/presentation.pdf

Source code: https://github.com/efstathios-chatzikyriakidis/content-based-image-retrieval/tree/master/source-code

Presentation topic: “Adversarial Face De-identification”

Presenter: Efstathios Chatzikyriakidis

PDF presentation: https://github.com/efstathios-chatzikyriakidis/adversarial-face-de-identification/blob/master/texts/msc-thesis-presentation.pdf

Experiments (exported files): https://github.com/efstathios-chatzikyriakidis/adversarial-face-de-identification/tree/master/results

Presentation topic: “Adversarial Examples and Generative Adversarial Networks”

Presenter: Efstathios Chatzikyriakidis

Contributor: Christos Papaioannidis

PDF presentation: https://github.com/efstathios-chatzikyriakidis/adversarial-face-de-identification/blob/master/research/MultiDrone%20-%20Adversarial%20Examples%20-%20GANs.pdf

This project aims at developing an open-source experimental prototype for solving and generating Sudoku puzzles using only the strength of Genetic Algorithms. It is not a general-purpose GA framework but a specific GA implementation for solving and generating Sudoku puzzles. The mechanics of the GA are based on the scientific paper “Solving and Rating Sudoku Puzzles with Genetic Algorithms” by Timo Mantere and Janne Koljonen. From the first moment, I liked the paper, so I implemented it in Python. I have also added some variations to make the algorithm more efficient. This project can be used to solve or generate new NxN Sudoku puzzles with N sub-boxes (e.g., 4×4, 9×9, etc.).

Continue reading

The project “PGASystem” (Parallel Genetic Algorithms System) is a system under development, based on the client / server architecture, that can be used to implement and study parallel genetic algorithms.

Continue reading

Thoughts on Automatic Software Repairing and Genetic Programming

In the field of Software Engineering, considerable emphasis is placed on the development of methodologies and mechanisms for the design of optimal software systems. Moreover, the quality of a software system can be assessed by carrying out appropriate measurements. Key features under study during the evaluation of a system are reliability, stability, security, portability, and usability. The quality of a software system depends mainly on the time spent, the expenses made, the debugging and testing techniques used, etc.

Continue reading

Taxonomy

The Ant Colony System algorithm is an example of an Ant Colony Optimization method from the field of Swarm Intelligence, Metaheuristics and Computational Intelligence. Ant Colony System is an extension of the Ant System algorithm and is related to other Ant Colony Optimization methods such as the Elite Ant System and the Rank-based Ant System.

Continue reading

Taxonomy

The Cultural Algorithm is an extension to the field of Evolutionary Computation and may be considered a Meta-Evolutionary Algorithm. It more broadly belongs to the field of Computational Intelligence and Metaheuristics. It is related to other high-order extensions of Evolutionary Computation such as the Memetic Algorithm.

Continue reading

Taxonomy

Harmony Search belongs to the fields of Computational Intelligence and Metaheuristics.

Continue reading

Taxonomy

Memetic Algorithms have elements of Metaheuristics and Computational Intelligence. Although they have principles of Evolutionary Algorithms, they may not strictly be considered an Evolutionary Technique. Memetic Algorithms have functional similarities to Baldwinian Evolutionary Algorithms, Lamarckian Evolutionary Algorithms, Hybrid Evolutionary Algorithms, and Cultural Algorithms. Using ideas of memes and Memetic Algorithms in optimization may be referred to as Memetic Computing.

Continue reading

Within the framework of the course “Computer Networks III – Theory” (Department of Informatics and Communications, T.E.I. of Central Macedonia), we were asked to prepare a presentation related to the content of the course. The topic of my presentation was Genetic Routing.

Below I provide a personal, portable implementation (in C++) of a classic Differential Evolution algorithm used to maximize the function f(x) = sin(x) in the domain 0 <= x <= 2pi. You can compile the program with the g++ compiler.

Continue reading

Below I provide a personal, portable implementation (in C++) of a classic genetic algorithm (evolutionary algorithm) used to maximize the function f(x, y) = sin(x) * sin(y) in the domain 0 <= x, y <= 2pi. You can compile the program with the g++ compiler.

Continue reading

Below I provide a personal, portable implementation (in C++) of a classic genetic algorithm (evolutionary algorithm) used to maximize the function f(x) = sin(x) in the domain 0 <= x <= 2pi. You can compile the program with the g++ compiler.

Continue reading

Below I provide a personal, portable implementation (in C++) of a classic genetic algorithm (evolutionary algorithm) used to maximize the function f(x) = sin(x) in the domain 0 <= x <= 2pi. You can compile the program with the g++ compiler.

Continue reading

Below I provide a personal, portable implementation (in C++) of a classic genetic algorithm (evolutionary algorithm) used to maximize the function f(x) = sin(x) in the domain 0 <= x <= 2pi. You can compile the program with the g++ compiler.

Continue reading

Taxonomy

The Artificial Immune Recognition System belongs to the field of Artificial Immune Systems, and more broadly to the field of Computational Intelligence. It was extended early to the canonical version called the Artificial Immune Recognition System 2 (AIRS2) and provides the basis for extensions such as the Parallel Artificial Immune Recognition System [Watkins2004]. It is related to other Artificial Immune System algorithms such as the Dendritic Cell Algorithm, the Clonal Selection Algorithm, and the Negative Selection Algorithm.

Continue reading

Taxonomy

The Self-Organizing Map algorithm belongs to the field of Artificial Neural Networks and Neural Computation. More broadly it belongs to the field of Computational Intelligence. The Self-Organizing Map is an unsupervised neural network that uses a competitive (winner-take-all) learning strategy. It is related to other unsupervised neural networks such as the Adaptive Resonance Theory (ART) method. It is related to other competitive learning neural networks such as the Neural Gas Algorithm and the Learning Vector Quantization algorithm, which is a similar algorithm for classification without connections between the neurons. Additionally, SOM is a baseline technique that has inspired many variations and extensions, not limited to the Adaptive-Subspace Self-Organizing Map (ASSOM).

Continue reading

Taxonomy

The Genetic Algorithm is an Adaptive Strategy and a Global Optimization technique. It is an Evolutionary Algorithm and belongs to the broader study of Evolutionary Computation. The Genetic Algorithm is a sibling of other Evolutionary Algorithms such as Genetic Programming, Evolution Strategies, Evolutionary Programming, and Learning Classifier Systems. The Genetic Algorithm is a parent of a large number of variant techniques and sub-fields too numerous to list.

Continue reading