I’m Attila, and before I joined Fetch.AI five months ago, I was a physics PhD student, studying statistical and quantum physics and working on deep-learning-related projects.
At Fetch.AI I’m part of the Open Economic Framework (OEF) team. The OEF is an ecosystem where autonomous agents live, discover one another and communicate. To achieve this, the OEF has three building blocks: SDKs (which allow you to create autonomous agents connected to the OEF), OEF Core (the main entry point) and OEF Search (which provides agent discovery functions). My focus as a member of this team is on Search, but I also authored the Kotlin SDK. This SDK enables developers to write agents in Java and Kotlin, making it straightforward to integrate agents into Android mobile apps.
OEF Search is a self-organising network of search nodes. Each node is a containerised system: its pluggable components run inside containers, connect to one another over a network, and are written in multiple languages (C++, Python). One of the components I created is an agent-service store that performs local search based on embeddings. In the OEF, agents provide services; for example, a car agent can sell information about traffic. These seller agents want to advertise their services on the OEF so that other agents (buyers), who are willing to pay for such information, can find them. Once buyer agents find the sellers, they negotiate prices and trade via the OEF. Perhaps the easiest way to picture the OEF is as a marketplace for artificial intelligence, where everything is negotiable and autonomous.
To enable advertising, agents describe their services to the OEF, and these descriptions are held in storage components we call DAPs (short for “Data Access Points”). When a buyer agent issues a search on the network and the query contains a string or a service description, the embedding DAP finds agents whose services are possible matches. It does this by encoding service descriptions into a vector space: given a description, the system extracts its text and uses word embeddings to calculate a high-dimensional representation. The current version of the system uses pre-trained word2vec (a shallow neural network that maps each word to a vector) and GloVe (a model that learns word vectors from global co-occurrence statistics) models for this calculation. The system will also support online learning, enabling this part of the search system to be self-adapting: as it sees an increasing number of use cases it learns better representations of services, and thus provides better results. When a buyer issues a search request, the network calculates the embedding of the query and finds the closest points in the vector space. These close points represent agents that provide services similar to those the buyer is interested in. The ability of agents to search is paramount to the success of the network, and our chief executive Humayun Sheikh has outlined Fetch.AI’s desire to become the Google of autonomous agents.
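To make the idea concrete, here is a minimal sketch of embedding-based matching. It is not the OEF implementation: toy hand-made word vectors stand in for a pre-trained word2vec/GloVe model, a description is embedded by averaging its word vectors, and cosine similarity ranks registered services against a query. All names and vectors are illustrative.

```python
import math

# Toy word vectors standing in for a pre-trained word2vec/GloVe model
# (real models map words to vectors with hundreds of dimensions).
WORD_VECTORS = {
    "traffic":  [0.9, 0.1, 0.0],
    "road":     [0.8, 0.2, 0.1],
    "weather":  [0.1, 0.9, 0.0],
    "forecast": [0.0, 0.8, 0.3],
    "charging": [0.1, 0.1, 0.9],
}

def embed(text):
    """Embed a description as the average of its known words' vectors."""
    vecs = [WORD_VECTORS[w] for w in text.lower().split() if w in WORD_VECTORS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query, services, top_k=2):
    """Rank registered service descriptions by closeness to the query."""
    q = embed(query)
    ranked = sorted(services, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:top_k]

services = ["traffic road information", "weather forecast data", "charging point status"]
print(search("road traffic", services))
```

A query about road traffic lands nearest the traffic-information service in the vector space, while the weather and charging services score much lower, which is exactly the "closest points" behaviour described above.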
The video below shows Fetch.AI autonomous agents optimising vehicle journeys. When a car’s battery runs low, the car’s agent actively searches for nearby charging points with minimal queueing times. After ascertaining the current wait time from each charging station’s agent, the car’s agent calculates the total detour time by adding up the time taken to get to the station, the wait time and the time to drive from the station to the destination, then chooses the station offering the shortest detour overall. As the driver travels towards the chosen station, the car’s agent continually monitors the wait times at other local stations, and if the ‘shortest detour’ station stops being the shortest, it begins the search again. This inter-agent communication drastically reduces wait times at the charging stations, helping drivers reach their destinations faster and more efficiently.
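The station-choice logic above boils down to a small calculation. The sketch below is a hypothetical simplification of what such an agent computes, with made-up station names and times in minutes:

```python
def total_detour(drive_to, wait, drive_on):
    """Detour time: drive to the station, wait to charge, then continue on."""
    return drive_to + wait + drive_on

def best_station(stations):
    """Pick the station whose total detour is shortest.

    `stations` maps a station name to (drive-to, reported-wait, drive-on)
    times in minutes, as quoted by each station's agent.
    """
    return min(stations, key=lambda name: total_detour(*stations[name]))

# Hypothetical figures, in minutes.
stations = {
    "station_a": (5, 20, 12),   # nearby, but a long queue
    "station_b": (10, 2, 11),   # slightly off-route, almost no queue
}
print(best_station(stations))  # station_b
```

The agent would re-run `best_station` whenever a station reports a new wait time, which is what lets it switch targets mid-journey when the ‘shortest detour’ station stops being the shortest.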
Another aspect I’m working on is targeted broadcasting. When a query reaches a search node, a search over the agents immediately connectable through that node is conducted locally. In addition, if the issuer asks for it, the node broadcasts the search to its connected nodes. The broadcast is targeted, meaning only the nodes most relevant to the query handle the search, which helps to produce better results. Each search node can itself be embedded into the high-dimensional space (as the average of the services registered with it), so a node can calculate how close it is to the query topic and broadcast to other search nodes that are closer to the specific topic. For example, if the search is about healthcare, the query moves towards nodes in that field, ensuring the matches become more and more refined.
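As an illustration of how a node might decide where to forward a query (a sketch under my own assumptions, not OEF Search code), the snippet below embeds a node as the average of its registered service vectors and forwards only to peers that score closer to the query than the node itself. The function names and toy 2-D vectors are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def node_embedding(service_vectors):
    """A node's position in the space: the average of its services' vectors."""
    n = len(service_vectors)
    return [sum(dim) / n for dim in zip(*service_vectors)]

def broadcast_targets(query_vec, own_services, peers):
    """Forward the query only to peers closer to its topic than we are."""
    own_score = cosine(query_vec, node_embedding(own_services))
    return [name for name, vec in peers.items()
            if cosine(query_vec, vec) > own_score]

# Toy 2-D embeddings: axis 0 ~ "transport", axis 1 ~ "healthcare".
own_services = [[0.9, 0.1], [0.7, 0.3]]          # mostly transport services
peers = {
    "health_node":    [0.1, 0.9],
    "transport_node": [0.95, 0.05],
}
print(broadcast_targets([0.0, 1.0], own_services, peers))  # ['health_node']
```

A healthcare query thus hops from a transport-heavy node towards the peer sitting in the healthcare region of the space, matching the routing behaviour described above.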