What is EpiK Protocol?

June 21, 2022

As one of the important research fields of artificial intelligence, knowledge graph research and development has completed its first half.

In 2012, Google's Knowledge Graph product took shape, ushering in the era of knowledge graphs. Since then, knowledge graphs have been widely used in natural language processing tasks such as information search, automatic question answering, and decision analysis, across fields such as finance, e-commerce, healthcare, and government affairs.

Building a knowledge graph requires four steps: data extraction, data fusion, data reasoning, and data decision-making.
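To make the four steps concrete, here is a minimal sketch of such a pipeline in Python. The function names, the toy triple format, and the sample data are illustrative assumptions, not any real EpiK Protocol API.

```python
# A toy four-step knowledge graph pipeline: extract -> fuse -> reason -> decide.
# All names and data here are illustrative, not a real API.

def extract(raw_texts):
    """Step 1: pull (subject, relation, object) triples out of raw records."""
    triples = []
    for text in raw_texts:
        # A real extractor would use NLP; here we just split "A|rel|B" strings.
        subj, rel, obj = text.split("|")
        triples.append((subj.strip(), rel.strip(), obj.strip()))
    return triples

def fuse(triples):
    """Step 2: merge duplicate triples coming from different sources."""
    return sorted(set(triples))

def reason(triples):
    """Step 3: derive new facts, e.g. a naive transitive 'located_in' rule."""
    located = {(s, o) for s, r, o in triples if r == "located_in"}
    inferred = {(a, "located_in", c)
                for a, b in located for b2, c in located if b == b2}
    return sorted(set(triples) | inferred)

def decide(triples, entity):
    """Step 4: answer a simple query against the finished graph."""
    return [(r, o) for s, r, o in triples if s == entity]

raw = ["EpiK|located_in|Web3", "Web3|located_in|Internet",
       "EpiK|located_in|Web3"]  # duplicate on purpose
graph = reason(fuse(extract(raw)))
print(decide(graph, "EpiK"))
# [('located_in', 'Internet'), ('located_in', 'Web3')]
```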


Knowledge graphs, therefore, are built from data. Data is the leading one and algorithms are the zeros behind it: data determines the upper limit of what knowledge graph technology can achieve, and algorithms only help us approach that limit.

The quality of the data directly determines the efficiency and quality of knowledge graph construction.

Today this data is usually obtained through multiple channels. Open data on the web, such as media sites and national government portals, is collected by crawlers; other data is obtained through institutional cooperation or the purchase of rights, for example from Digital Science or the Chinese Academy of Engineering.

Data from these different sources is then fused and linked by algorithms, providing the foundation on which knowledge graphs are built.
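As a hedged illustration of what "fusing and linking" multi-source data can mean, the sketch below merges records about the same entity from two sources by a normalized-name key; the sources, fields, and matching rule are all invented for the example.

```python
# Toy data fusion: merge records about one entity from two sources
# using a normalized name as the join key. All data here is made up.

def norm(name: str) -> str:
    # Collapse case and whitespace so "Example  UNIVERSITY" matches "Example University".
    return " ".join(name.lower().split())

crawled = [{"name": "Example University", "ranking": "12"}]      # e.g. from a crawler
licensed = [{"name": "example  UNIVERSITY", "founded": "1900"}]  # e.g. purchased data

fused = {}
for record in crawled + licensed:
    key = norm(record["name"])
    fused.setdefault(key, {}).update(record)  # later sources fill in / override fields

print(fused[norm("Example University")])
# {'name': 'example  UNIVERSITY', 'ranking': '12', 'founded': '1900'}
```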

So the problem arises: have algorithms really matured enough to fully understand the logic in the data, organize the knowledge or common sense of an industry field into a structured form, and then reason and make decisions on that basis? Unfortunately, just as everyone's experience and cognition have their "walls", automatically collected information comes from mixed sources whose correctness cannot even be guaranteed. Current algorithms have not yet learned complex human common sense or how to keep up with constantly changing information, let alone build a knowledge graph and make decisions on top of such data.

Since building a knowledge graph from automatically collected information has obvious defects, what about building one manually? Manual construction can draw on the years of knowledge accumulated by experts in each field, avoid algorithmic decisions that run contrary to human morality or emotion, and even trace information back to its source to guarantee data quality.


("Resident Evil" Red Queen)

With so many advantages, why has no manually constructed knowledge graph been widely adopted? Because manual construction is relatively slow, often cannot keep up with the evolution of information, and carries higher labor costs.

If manual work and automation were combined, could the quality and efficiency of knowledge graph construction both improve steadily? Someone has thought of it, and done it.

Consider a piece-rate incentive scheme: before the college entrance examination, a large number of university teachers and professors, current and former students, and members of the public collect evaluations of colleges and universities. Experts in education and application counseling then verify and screen the submissions, and the integrated information is packaged and delivered to application consulting agencies, college admissions offices, and education departments to form a knowledge graph. Such a graph could generate reliable application advice for candidates nationwide, avoiding the loss of talent caused by information asymmetry.

This is what EpiK Protocol wants to do. It is a knowledge graph collaboration platform based on blockchain technology. Through incentives, it organizes users around the world to collaborate, sorts the knowledge of various fields into knowledge graphs, and stores them permanently in a distributed manner, providing a steady stream of high-quality data for existing artificial intelligence language models and promoting the application of knowledge graphs in production and daily life.

There are three main roles in the EpiK Protocol system. Bounty hunters freely take on tasks in various knowledge fields and handle submission for acceptance; domain experts break professional tasks down, accept the results, and upload the data to knowledge miners; knowledge miners exercise governance power, participating in the nomination of and voting for domain experts to keep the ecosystem healthy. From data production, through acceptance, to the selection of core experts, this forms an effective closed loop of production management.
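As a rough sketch of that closed loop, the state machine below uses the three role names from the article; the task states, fields, and function names are assumptions made for illustration.

```python
# Minimal state machine for the task loop described above.
# Role names come from the article; states and fields are assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    OPEN = auto()       # posted by a domain expert
    SUBMITTED = auto()  # data submitted by a bounty hunter
    ACCEPTED = auto()   # accepted by the domain expert
    STORED = auto()     # uploaded via a knowledge miner

@dataclass
class Task:
    domain: str
    state: State = State.OPEN
    data: list[str] = field(default_factory=list)

def bounty_hunter_submit(task: Task, rows: list[str]) -> None:
    assert task.state is State.OPEN
    task.data.extend(rows)
    task.state = State.SUBMITTED

def domain_expert_accept(task: Task) -> None:
    assert task.state is State.SUBMITTED
    task.state = State.ACCEPTED

def knowledge_miner_store(task: Task) -> None:
    assert task.state is State.ACCEPTED
    task.state = State.STORED

t = Task(domain="education")
bounty_hunter_submit(t, ["Example University|offers|CS"])
domain_expert_accept(t)
knowledge_miner_store(t)
print(t.state)  # State.STORED
```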


EpiK Protocol also innovatively presents the tedious work of data screening as a game. By participating in the identification and screening of data, bounty hunters earn corresponding rewards, and there is no need to worry about tasks being too hard, because domain experts have already split them by difficulty. Players complete the manual processing of data in the course of the game, earning rewards as they level up. In this way, EpiK Protocol attracts more bounty hunters, compensating for the efficiency and quality limits of manual knowledge graph construction.
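A toy version of that piece-rate reward loop might look like this; the point values and the level curve are invented, not EpiK's actual parameters.

```python
# Toy piece-rate rewards: bounty hunters earn points per accepted item
# and level up. Point values and the level curve are invented.
REWARD_BY_DIFFICULTY = {"easy": 1, "medium": 3, "hard": 8}  # assumed values

def level(points: int) -> int:
    # Assumed curve: every 10 points is one level.
    return points // 10

points = 0
for item_difficulty in ["easy", "easy", "medium", "hard"]:
    points += REWARD_BY_DIFFICULTY[item_difficulty]

print(points, level(points))  # 13 1
```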


Where does the screened and processed data go? This is where blockchain technology plays to its strengths. To meet the need for orderly storage in specific fields, EpiK Protocol sets clear requirements on data quality, and the storage itself is completely free. The protocol collects small, ordered log files of the knowledge graphs in each field, periodically merges them into large snapshot files, and uploads the snapshots to Filecoin's open-market distributed storage for backup. This exploits Filecoin's strength in cold storage of large files while continuing to supply valid data.
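The log-then-snapshot flow could be sketched as below; the threshold, file format, and upload stub are assumptions standing in for the real system, which targets Filecoin's storage market.

```python
# Sketch of merging small per-domain log files into a large snapshot
# destined for cold storage. The threshold and upload stub are invented.
import json
import time

SNAPSHOT_THRESHOLD = 3  # assumed: merge once this many logs accumulate

pending_logs: list[dict] = []

def append_log(domain: str, triples: list[tuple[str, str, str]]) -> None:
    pending_logs.append({"domain": domain, "triples": triples,
                         "ts": time.time()})
    if len(pending_logs) >= SNAPSHOT_THRESHOLD:
        make_snapshot()

def make_snapshot() -> None:
    """Merge the ordered small logs into one large snapshot blob."""
    snapshot = sorted(pending_logs, key=lambda log: log["ts"])
    blob = json.dumps(snapshot).encode()
    upload_to_cold_storage(blob)  # stand-in for a Filecoin storage deal
    pending_logs.clear()

def upload_to_cold_storage(blob: bytes) -> None:
    print(f"snapshot of {len(blob)} bytes handed to a storage provider")

append_log("finance", [("BondX", "issued_by", "BankY")])
append_log("medicine", [("DrugA", "treats", "DiseaseB")])
append_log("education", [("Example University", "offers", "CS")])
# -> the third log triggers a snapshot
```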

Knowledge miners, the intermediaries that upload the data, are a crucial part of the EpiK Protocol system, and of course they do not go home empty-handed: bandwidth subsidies and the knowledge fund reserve form their income.

Manual plus automation: this semi-automatic approach to knowledge graph construction will remain mainstream for some time to come, especially in fields such as healthcare, security, and finance, where data quality requirements are high and manual review is needed to guarantee accuracy.

Research on knowledge graph construction has entered its second half. Building a solid, closed-loop knowledge graph collaboration tool that makes collaboration easier will be an unavoidable part of this track, and a focus of competition among enterprises and institutions.

Whoever stands out in this field will have the opportunity, and the ability, to claim the first ticket into the Web 3.0 era.

