“Everything changes and nothing stands still.” ― Heraclitus.
Welcome to efxa.org, my personal website! I am Efstathios, a computer scientist and philosopher living in Greece, and I have a deep passion for Software Engineering and Computational Intelligence. Enjoy my posts!
I am so proud that Crowdspeak’s alpha demo is up and running!
On Crowdspeak you will be able to create your own crowd-questions. For this demo, there is just one fixed example question: “What are the biggest challenges of working from home?”.
Visit Crowdspeak and type your own response! Crowdspeak will then take your response into account, and a new collective response will be generated in just a few minutes.
This collective response is a machine-generated text that expresses the majority view of the individual responses given by the people!
Don’t forget to give us your feedback and to subscribe to the Crowdspeak newsletter for more!
Conditional Language Models are not used only in Text Summarization and Machine Translation. They can also be used for Image Captioning!
Here is a great example from Machine Learning Mastery of how we can connect the Feature Extraction component of a SOTA Computer Vision model (e.g., VGG, ResNet, Inception, Xception) with the input of a Language Model in order to generate the caption of an image.
The whole deep learning architecture can be trained end-to-end. It is a simple encoder-decoder architecture, but it can be extended and improved with an attention interface between the encoder and the decoder, or even with Transformer layers!
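For the curious, here is a minimal sketch in Keras of such a merge-style encoder-decoder captioner, loosely following the tutorial’s idea of feeding pre-extracted VGG16 features next to a caption language model. The vocabulary size and maximum caption length below are placeholder values, not numbers from the tutorial:

```python
# Minimal sketch of a merge-style image captioning model.
# Assumes pre-extracted 4096-d VGG16 (fc2) features; vocab_size
# and max_caption_length are placeholder values.
from tensorflow.keras.layers import (Input, Dense, Embedding,
                                     LSTM, Dropout, add)
from tensorflow.keras.models import Model

vocab_size = 10000          # assumed vocabulary size
max_caption_length = 34     # assumed longest training caption

# Encoder side: project the frozen CNN feature vector.
image_features = Input(shape=(4096,))
img = Dropout(0.5)(image_features)
img = Dense(256, activation='relu')(img)

# Decoder side: a language model over the partial caption so far.
caption_input = Input(shape=(max_caption_length,))
seq = Embedding(vocab_size, 256, mask_zero=True)(caption_input)
seq = Dropout(0.5)(seq)
seq = LSTM(256)(seq)

# Merge both representations and predict the next caption word.
merged = add([img, seq])
merged = Dense(256, activation='relu')(merged)
output = Dense(vocab_size, activation='softmax')(merged)

model = Model(inputs=[image_features, caption_input], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```

At inference time you feed the image features plus the caption generated so far, pick the next word, append it, and repeat until an end token is produced.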
Adding attention not only enables the model to attend differently to various parts of the input image, but also helps it explain its decisions: for each generated word in the output caption, we can visualize which part of the input image was attended.
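To make the attention interface concrete, here is a hedged sketch of a single Bahdanau-style (additive) attention step over spatial CNN features (say, a 7x7x512 convolutional map flattened into 49 regions). The shapes, weight matrices, and function name are illustrative assumptions, not code from the tutorial:

```python
# Illustrative sketch of additive (Bahdanau) attention over image
# regions; all shapes and names here are assumptions for clarity.
import tensorflow as tf

def attention_step(regions, decoder_state, W1, W2, v):
    # regions:       (batch, 49, 512)  flattened conv feature map
    # decoder_state: (batch, 256)      current decoder hidden state
    # W1: (512, a), W2: (256, a), v: (a, 1)  learned parameters
    keys = tf.einsum('brd,da->bra', regions, W1)
    query = tf.expand_dims(tf.matmul(decoder_state, W2), axis=1)
    scores = tf.einsum('bra,ao->bro', tf.tanh(keys + query), v)
    alpha = tf.nn.softmax(scores, axis=1)  # weights over 49 regions
    context = tf.reduce_sum(alpha * regions, axis=1)  # (batch, 512)
    return context, alpha
```

Because alpha sums to one over the 49 regions, it can be reshaped to 7x7 and overlaid on the image as a heatmap, one map per generated word.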
Natural languages (speech and text) are the way we communicate as a species. They help us express whatever is inside us to the outer world.
Natural languages are not designed. They emerge. Thus, they are messy and semi-structured. If they had been designed, NLP would already have been solved by linguists 50 years ago, using context-free grammars and finite automata.
Today, we are trying to “learn” language artificially from text, using state-of-the-art Deep Neural Language Models that behave probabilistically, predicting the next token in a sequence.
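To make this concrete, here is a tiny illustration, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (both my choice, purely for demonstration): given the text so far, the model assigns a probability to every token in its vocabulary.

```python
# Toy demonstration of next-token prediction; the model and the
# prompt are my own choices for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Everything changes and nothing stands",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits    # (1, seq_len, vocab_size)

# Probability distribution over the very next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```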
Moreover, natural languages are not static. They evolve and change: the same word can be used at different times with different meanings. Language is a moving target.
Plato, the Greek philosopher, was negative about written language (despite the fact that he wrote so much himself), because a language cannot express the fullness of a human mind, of a person. Socrates and many philosophers of the Peripatetic school never wrote texts. The only way they communicated was through real human contact: body language, eyes, speech, touch. Only in this way, they believed, can a human mind and heart evolve and create new worlds.
However, we are living in a century where everything is either digitalised or written down, and human communication is reduced to a minimum.