Maya script: Maya script, also known as Maya glyphs, was historically the native writing system of the Maya civilization of Mesoamerica and is the only Mesoamerican writing system that has been substantially deciphered. The earliest inscriptions found that are identifiably Maya date to the 3rd century BCE in San Bartolo, Guatemala. Maya writing was in continuous use throughout Mesoamerica until the Spanish conquest of the Maya in the 16th and 17th centuries.
Deep learning: Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" in deep learning refers to the use of multiple layers in the network. The methods used can be supervised, semi-supervised or unsupervised.
Maya civilization: The Maya civilization (/ˈmaɪə/) was a Mesoamerican civilization that existed from antiquity to the early modern period. It is known for its ancient temples and glyphs (script). The Maya script is the most sophisticated and highly developed writing system in the pre-Columbian Americas. The civilization is also noted for its art, architecture, mathematics, calendar, and astronomical system. The Maya civilization developed in the Maya Region, an area that today comprises southeastern Mexico, all of Guatemala and Belize, and the western portions of Honduras and El Salvador.
Convolutional neural network: A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter (or kernel) optimization. Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image sized 100 × 100 pixels.
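The parameter-count contrast can be checked with a few lines of arithmetic; a minimal sketch follows, in which the 5 × 5 kernel size is an illustrative assumption rather than a figure from the text.

```python
# Minimal sketch: weights per fully connected neuron vs. per convolutional filter.
image_h, image_w = 100, 100

# Fully connected: every neuron sees every pixel, so a single neuron
# needs image_h * image_w = 10,000 weights for a 100 x 100 image.
weights_per_fc_neuron = image_h * image_w

# Convolutional: a neuron looks at a small patch through a shared filter,
# so an (assumed) 5 x 5 kernel needs only 25 weights, regardless of image size.
kernel_h, kernel_w = 5, 5
weights_per_conv_filter = kernel_h * kernel_w

print(weights_per_fc_neuron)    # 10000
print(weights_per_conv_filter)  # 25
```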
Types of artificial neural networks: There are many types of artificial neural networks (ANNs). Artificial neural networks are computational models inspired by biological neural networks and are used to approximate functions that are generally unknown. In particular, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.
Feedforward neural network: A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by the direction of the flow of information between its layers. The flow is uni-directional, meaning that information in the model moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow.
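A minimal sketch of this uni-directional flow is given below; the layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of one forward pass: input -> hidden -> output, no cycles.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input layer (4 nodes) -> hidden layer (3 nodes)
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2))   # hidden layer (3 nodes) -> output layer (2 nodes)
b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return h @ W2 + b2         # output; information only ever moves forward

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```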
Maya codices: Maya codices (singular codex) are folding books written by the pre-Columbian Maya civilization in Maya hieroglyphic script on Mesoamerican bark paper. The folding books are the products of professional scribes working under the patronage of deities such as the Tonsured Maize God and the Howler Monkey Gods. Most of the codices were destroyed by conquistadors and Catholic priests in the 16th century. The codices have been named for the cities where they eventually settled.
Transformer (machine learning model): A transformer is a deep learning architecture that relies on the parallel multi-head attention mechanism. The modern transformer was proposed in the 2017 paper "Attention Is All You Need" by Ashish Vaswani et al. of the Google Brain team. It is notable for requiring less training time than previous recurrent neural architectures, such as long short-term memory (LSTM), and its later variations have been widely adopted for training large language models on large (language) datasets, such as the Wikipedia corpus and Common Crawl, by virtue of the parallelized processing of the input sequence.
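As a rough illustration of the attention mechanism the architecture relies on, the sketch below implements single-head scaled dot-product attention over a toy sequence; a multi-head layer would run several such computations in parallel. All shapes and data here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of scaled dot-product attention (single head).
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # similarity of each query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
x = rng.normal(size=(seq_len, d_model))   # a toy input sequence, processed in parallel
print(attention(x, x, x).shape)           # (5, 8): self-attention preserves sequence shape
```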
Data and information visualization: Data and information visualization (data viz or info viz) is the practice of designing and creating graphic or visual representations that make large amounts of complex quantitative and qualitative data and information easy to communicate and easy to understand, with the help of static, dynamic or interactive visual items.
Deep belief network: In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors. After this learning step, a DBN can be further trained with supervision to perform classification.
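A hedged sketch of a single such layer is shown below: a restricted Boltzmann machine whose hidden units infer features from a binary input and then probabilistically reconstruct it. The layer sizes and random data are illustrative assumptions, not part of the text above.

```python
import numpy as np

# Minimal sketch of one DBN layer (a restricted Boltzmann machine).
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible-unit biases
b_h = np.zeros(n_hidden)    # hidden-unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.integers(0, 2, size=n_visible).astype(float)  # a binary input example
p_h = sigmoid(v @ W + b_h)                 # hidden units act as feature detectors
h = (rng.random(n_hidden) < p_h) * 1.0     # sample the hidden layer
p_v = sigmoid(h @ W.T + b_v)               # probabilistic reconstruction of the input
print(p_v)
```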
Maya stelae: Maya stelae (singular stela) are monuments that were fashioned by the Maya civilization of ancient Mesoamerica. They consist of tall, sculpted stone shafts and are often associated with low circular stones referred to as altars, although their actual function is uncertain. Many stelae were sculpted in low relief, although plain monuments are found throughout the Maya region. The sculpting of these monuments spread throughout the Maya area during the Classic Period (250–900 AD), and these pairings of sculpted stelae and circular altars are considered a hallmark of Classic Maya civilization.
Maya numerals: The Maya numeral system was the system used to represent numbers and calendar dates in the Maya civilization. It was a vigesimal (base-20) positional numeral system. The numerals are made up of three symbols: zero (a shell), one (a dot) and five (a bar). For example, thirteen is written as three dots in a horizontal row above two horizontal bars; sometimes it is also written as three vertical dots to the left of two vertical bars. With these three symbols, each of the twenty vigesimal digits could be written.
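The positional rule can be illustrated with a short conversion routine; the ASCII rendering of dots, bars, and the zero shell below is purely an illustrative assumption.

```python
# Minimal sketch: express a number as base-20 digits, each built from
# dots (one), bars (five), and a shell glyph for zero.
def maya_digits(n):
    digits = []
    while True:
        digits.append(n % 20)
        n //= 20
        if n == 0:
            return digits[::-1]   # most significant vigesimal digit first

def render_digit(d):
    if d == 0:
        return "shell"
    dots = "." * (d % 5)          # each dot counts one
    bars = "-" * (d // 5)         # each bar counts five
    return (dots + " " + bars).strip()

for d in maya_digits(13):
    print(render_digit(d))        # "... --": three dots over two bars
```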
Text-to-image model: A text-to-image model is a machine learning model which takes an input natural language description and produces an image matching that description. Such models began to be developed in the mid-2010s, as a result of advances in deep neural networks. In 2022, the output of state-of-the-art text-to-image models, such as OpenAI's DALL-E 2, Google Brain's Imagen, Stability AI's Stable Diffusion, and Midjourney, began to approach the quality of real photographs and human-drawn art.
Maya calendar: The Maya calendar is a system of calendars used in pre-Columbian Mesoamerica and in many modern communities in the Guatemalan highlands, Veracruz, Oaxaca and Chiapas, Mexico. The essentials of the Maya calendar are based upon a system which had been in common use throughout the region, dating back to at least the 5th century BC. It shares many aspects with calendars employed by other earlier Mesoamerican civilizations, such as the Zapotec and Olmec, and contemporary or later ones such as the Mixtec and Aztec calendars.
Maya warfare: Although the Maya were once thought to have been peaceful, current theories emphasize the role of inter-polity warfare as a factor in the development and perpetuation of Maya society. The goals and motives of warfare in Maya culture are not thoroughly understood, but scholars have developed models for Maya warfare based on several lines of evidence, including fortified defenses around structure complexes, artistic and epigraphic depictions of war, and the presence of weapons such as obsidian blades and projectile points in the archaeological record.
Generative pre-trained transformer: Generative pre-trained transformers (GPT) are a type of large language model (LLM) and a prominent framework for generative artificial intelligence. The first GPT was introduced in 2018 by OpenAI. GPT models are artificial neural networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs.
Deep reinforcement learning: Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective.
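As a loose illustration of acting from unstructured input, the sketch below maps a raw, flattened pixel observation to per-action values with a tiny two-layer network and picks actions epsilon-greedily; every size and name here is an illustrative assumption rather than any particular deep RL algorithm.

```python
import numpy as np

# Minimal sketch: a small network turns raw pixels into action values,
# so no hand-engineered state features are required.
rng = np.random.default_rng(0)
obs_dim, hidden_dim, n_actions = 64 * 64, 32, 4   # e.g. a flattened 64x64 frame
W1 = rng.normal(scale=0.01, size=(obs_dim, hidden_dim))
W2 = rng.normal(scale=0.01, size=(hidden_dim, n_actions))

def q_values(obs):
    return np.maximum(obs @ W1, 0.0) @ W2     # two-layer estimate of each action's value

def choose_action(obs, epsilon=0.1):
    if rng.random() < epsilon:                # explore by trial and error
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(obs)))      # otherwise act greedily on the estimates

obs = rng.random(obs_dim)                     # a stand-in for raw pixel input
print(choose_action(obs))
```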
Residual neural network: A residual neural network (also known as a residual network or ResNet) is a deep learning model in which the weight layers learn residual functions with reference to the layer inputs. A residual network is a network with skip connections that perform identity mappings, merged with the layer outputs by addition. It behaves like a Highway Network whose gates are opened through strongly positive bias weights. This enables deep learning models with tens or hundreds of layers to train easily and to approach better accuracy as they grow deeper.
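The residual idea can be written in a few lines: the weight layers compute a residual function F(x), and the skip connection adds the unchanged input back in, so the block outputs F(x) + x. The sizes and random weights below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a residual block with an identity skip connection.
rng = np.random.default_rng(0)
dim = 8
W1 = rng.normal(scale=0.1, size=(dim, dim))
W2 = rng.normal(scale=0.1, size=(dim, dim))

def residual_block(x):
    f = np.maximum(x @ W1, 0.0) @ W2   # the residual function F(x) learned by the weight layers
    return f + x                       # identity mapping merged with the layer output by addition

x = rng.normal(size=dim)
print(residual_block(x))
```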
Ancient Maya art: Ancient Maya art comprises the visual arts of the Maya civilization, an eastern and south-eastern Mesoamerican culture made up of a great number of small kingdoms in present-day Mexico, Guatemala, Belize and Honduras. Many regional artistic traditions existed side by side, usually coinciding with the changing boundaries of Maya polities. This civilization took shape in the course of the later Preclassic Period (from c. 750 BC to 100 BC), when the first cities and monumental architecture started to develop and the hieroglyphic script came into being.
Recurrent neural network: A recurrent neural network (RNN) is one of the two broad types of artificial neural network, characterized by the direction of the flow of information between its layers. In contrast to the uni-directional feedforward neural network, it is a bi-directional artificial neural network, meaning that it allows the output from some nodes to affect subsequent input to the same nodes. The ability to use internal state (memory) to process arbitrary sequences of inputs makes RNNs applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.
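A minimal sketch of this internal state is given below: the hidden vector h is carried from one sequence element to the next, so earlier inputs influence later outputs. The sizes and random data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a recurrent step: the hidden state acts as memory.
rng = np.random.default_rng(0)
in_dim, state_dim = 3, 5
W_x = rng.normal(scale=0.5, size=(in_dim, state_dim))
W_h = rng.normal(scale=0.5, size=(state_dim, state_dim))
b = np.zeros(state_dim)

h = np.zeros(state_dim)                     # initial internal state (memory)
sequence = rng.normal(size=(4, in_dim))     # a toy input sequence of length 4
for x_t in sequence:
    h = np.tanh(x_t @ W_x + h @ W_h + b)    # new state depends on the input AND the previous state
print(h)
```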