Review of Ugarit and Translation Alignment using Petronius

By Lucy Yang (KCL)

Texts aligned in Ugarit iAligner for this project: Satyricon 132.1, 132.2, 132.3, 132.4

The Satyricon by Petronius enjoys its modern reception both as a social satire of Neronian Rome, the period to which most scholars date it, and as a pioneering example of the ancient novel. While the Satyricon’s content and form contribute to its fascination, they also complicate its translation, particularly in terms of preserving semiotic and structural integrity (Kay, Computational Linguistics, 1993). In terms of content, the Satyricon has had a reputation for a sexual explicitness and subversion rarely found in texts of the ‘classical canon’. Its fragmentary state and the nature of its content kept the Satyricon from proper examination as an ancient text until its ‘scholarly worth’ was established by J. P. Sullivan in 1968 (The Satyricon of Petronius). Even then, the Satyricon has continued to present interpretive challenges.

At the same time, translating the Satyricon into English is inherently a form of interpretation. I argue that translation of the Satyricon is especially significant because it engages with the issues of semiotic integrity and censorship, outlined above, that are particularly relevant to this text. By studying translations, we gain insight into how the text is adapted for an English-language readership. How is the form of the Satyricon mediated through translation? What has each translation presumed about the purpose of the text? How do different translations change the reader’s interpretation? Using text alignment to examine different translation intentions, this project evaluates how this digital tool may be applied in translation studies and Classical reception.

As Hardwick notes in her seminal 2000 book on translation theory and reception theory, Translating Words, Translating Cultures, the main function of a translation of an ancient text is to provide ‘a contemporary means of understanding and responding to the ancient work’ (11-12). Translations offer a lens through which we can observe how a text was understood in different periods. The Satyricon was renowned for its ‘obscenity’ more than for its textual genius; this reputation often overshadowed its sophistication, leading to censorship through omission or alteration in translation (see Firebaugh 1922, below). The translations chosen for the project are more than 100 years apart, in order to demonstrate how increased scholarly recognition has changed the ways in which the Satyricon is translated.

The two chosen translations are labelled A and B respectively. Translation A is by W. C. Firebaugh, published in 1922 (text from WikiSource); translation B is by Gareth Schmeling, published by the Loeb Classical Library in 2020. I have selected Satyricon 132 for its moderate length, which keeps the project realisable and the conclusions specific to textual analysis. Moreover, this passage incorporates both verse and prose, a prominent feature of the Satyricon’s form that distinguishes it among the few surviving ancient novels. Content-wise, the passage includes features that contributed to the Satyricon’s early censorship as well as its later scholarly interest: it portrays both the ‘indecent’ element (Encolpius’ rage and shame over his sexual impotence) and the literary aspect of the Satyricon’s intricate engagement with classical allusion (Connors, Petronius the Poet, 1998: 145; Rimell, Petronius and the Anatomy of Fiction, 2002). This project aims to demonstrate how digital textual alignment can highlight alterations and adaptations from Latin to English at both the formal and the content level. By evaluating the alignment’s ability to identify and display these differences, it also aims to show how practising textual alignment may help non-linguist classicists, working mainly with translations, to overcome the untranslatability imposed by language barriers.

The methodology of translation alignment consists of pairing equivalences between the original text and the translated text(s) at word, phrase or sentence level (Yousef and Janicke, Survey, 2021). General guidelines for different original/translation language pairings can be accessed on the community website. In practice, how and what to align can be project-specific. The guidelines for Latin–English alignment provided by Valeria Irene Boano for Ugarit are mostly applied here. Most notably, this means aligning both the verb and its subject in English with the corresponding verb in Latin (Boano’s guidelines, 8.2). The Latin text and both translations were combined in a single document and cleaned of noise such as page numbers and annotations; punctuation was left in the text to ease the manual alignment process, but was not aligned. To keep the alignment rate consistent, definite and indefinite articles (‘a’, ‘an’, ‘the’) are not aligned. Latin verbs that imply their subject are aligned with the subject in English where possible. This project evaluates interpretative integrity across translations, reflecting the methodology of a non-linguist classicist with limited knowledge of the ancient language. To fulfil these needs, only words with direct semiotic correspondence are aligned, with the help of a Latin–English dictionary.
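To make the statistics concrete, the alignment rate that a tool like Ugarit reports can be thought of as the share of source tokens that take part in at least one alignment pair. The following is a minimal sketch under that assumption; the function name and the toy token pairs are invented for illustration and do not reflect Ugarit’s internal data model.

```python
# Minimal sketch of computing an alignment rate from manually created
# token pairs. The data model here (index-to-word tuples) is a toy
# assumption, not Ugarit's actual format.

def alignment_rate(source_tokens, pairs):
    """Share of source tokens that appear in at least one alignment pair."""
    aligned = {src for src, _ in pairs}
    return len(aligned) / len(source_tokens)

# Toy example: the opening Latin tokens of Sat. 132.1 and some
# hand-made pairs (Latin token index -> English token).
latin = ["iam", "pluribus", "osculis", "labra", "crepitabant"]
pairs = [(0, "now"), (2, "kisses"), (3, "lips")]

rate = alignment_rate(latin, pairs)
print(f"{rate:.0%} of Latin tokens aligned")  # 3 of 5 tokens -> 60%
```

Unaligned tokens (here pluribus and crepitabant) are exactly the ones this project treats as potentially interesting, rather than as mere noise.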

Graphic simulation makes the process of textual alignment efficiently interactive by highlighting the correspondence between texts (Yousef and Janicke, Survey, 2021). Ugarit provides a side-by-side view that gives immediate feedback on aligned tokens for both editing and viewing (Yousef 2020). Ugarit also generates bar charts of alignment-pair statistics, useful for an overview of the granularity of the alignment (Fig. 1). For translation studies, the most useful feature may be the chart view of aligned pairs, which lists how different translations render the same token from the original language, or exposes the lack of direct correspondence, i.e. tokens omitted by a translation. Alignment that occurs only between the translations, without correspondence to the original text, can be especially telling of the translation process, and will be discussed later.

Fig. 1: Table-view of translation pairs on Ugarit, downloadable data.

Text alignment also provides an opportunity to review metrical composition that is otherwise hard to detect in an unknown language. The passage in question contains a verse section in Sotadean metre, an important formal observation because it adds a further layer to the Satyricon’s embedded literary tradition. The Sotadean metre inverts the Homeric dactylic hexameter and was originally used for obscene or parodic verse, which lends extra literary nuance to Sat. 132.2 (my alignment), as it gives an epic register to Encolpius’ shamed, failed self-castration (Sapsford 2022, 3; 157). Both translations fail to capture the exact metre; but the loss of ancient metre in translation is a long-standing issue, and the compromise is generally taken as unavoidable (Crane et al. 2023, Beyond Translation, 170). While secondary literature may point out the lost metre, in-text annotation that highlights syllable lengths works more efficiently, such as the metrical annotation of Homeric texts by David Chamberlain, though such annotated corpora remain of limited availability. Translation alignment shows correspondence between pairs of texts, and this cannot stand in for metrical data, though the visual correspondence does to some extent highlight linguistic features, such as verb position, that may shape the reader’s understanding of the poetry (Fig. 2). Notably, despite its verse form, translation A has a much lower alignment rate than B. This might pass unnoticed without alignment, because most imagery is kept intact in A (where all three texts align), giving the impression that both translations are equally faithful to the original. But A invents and omits parts of the text, undermining the textual integrity on which the original Latin depends to create intertextuality. By showing which Latin tokens are aligned, the alignment makes clear which parts are a translation’s invention or omission.

How each translation handles the Satyricon’s form becomes even more relevant in the non-verse sections. Translation alignment shows that, at the beginning of Sat. 132.1, translation B follows the parallelism of the Latin, beginning each clause with iam (now); translation A abandons this structure and merges the actions into one continuous sequence introduced by ‘our’. This alteration may seem minor, but although A preserves a parallelism that begins each part of the sentence with the same word, that word has no aligned counterpart in the Latin, suggesting that A prioritises contemporary readability and treats the Satyricon as a novel for entertainment. In contrast, translation B stands out for its word-for-word approach: its unaligned parts are largely adverbs and subjects already implied or inflected in the Latin.

However, unaligned text, even adverbs and subjects, should not all be dismissed as mere semantic differences. Consider the unaligned Latin words in Sat. 132.2: illa, the feminine singular ‘she’, is used to refer to Encolpius’ penis. The feminine pronoun carries significance in its Roman semantic context but is lost in both translations. Also in Sat. 132.2, translation A omits fugi (‘I fled’) and its implication of mutual cowardice between Encolpius and his genitals, which shifts the dynamic of the mock-epic castration from self-shaming to a relentless pursuit to combat his impotence.

Fig. 2: Visualisation of alignment between original verse, non-verse translation and verse translation.

The side-by-side view between translations may reveal critical differences between versions. At the beginning of 132.4 (Fig. 3), translations A and B use opposite pronouns for Encolpius’ penis, feminine in B and masculine in A, and these are carried through the rest of the verse. Similarly, the word solo in 132.4 is rendered as ‘soil’ in B but ignored in A, which alters the meaning. Translation A likely makes such changes for the reason already argued: to make the text comprehensible in English rather than to reflect the Latin semantics.

Translation B’s approach may be more serviceable for non-Latinist classicists conducting literary criticism. However, words that rely on Latin’s inflected structure or carry meanings embedded in Latin literary semantics are still compromised in translation for the sake of coherence within English syntax and usage. The noun venereum is translated as ‘make love’ (B) and ‘love’ (A), both semantically coherent but omitting the evocation of Venus in the Latin. In Sat. 132.3, furciferae is an altered form of the invective furcifer, commonly used of slaves; again, actively aligning the text may direct the reader to the untranslated servitude carried by the invective. In Sat. 132.1, catomizari is an unusual form of catomidio that cannot be found in online Latin dictionaries; both translations render it with the same wording, ‘hoisted and flogged’. During alignment, the curious absence of the word from dictionaries may direct the reader to secondary literature, and from there to the discourse on the multiple versions of the Latin text, raising awareness of annotations to the Latin that might otherwise escape a non-Latinist’s notice.

Fig. 3: The presence and absence of the feminine pronoun, highlighted.

By practising active translation alignment, one is naturally drawn to the challenges. As demonstrated above, unaligned text or contradictory meanings between translations often flag ‘untranslatability’ and direct the reader to more in-depth understanding. Aligning the translations against each other may also reveal which ‘original’ text each translator worked from. The end of 132.3 in the Loeb’s Latin reads ‘rogo te, mihi apodixin « non » defunctoriam redde’. Alignment indicates a degree of correspondence with both translations, yet they convey very different meanings: A asks for a ‘sign…however faint’ and B for ‘serious proof’. This suggests that the two translations may be using different Latin texts, especially given the ambiguous addition of ‘« non »’. As the list of translation pairs makes identifiable, Sat. 132.4 contains alignment occurring only between the translations: ‘words’ is present in both translations but not found in the Latin. This may suggest that the later version cross-referenced the earlier one, or that the same translation convention was used.

My project conducts a practical manual translation alignment of an ancient text with two temporally distant translations, in order to evaluate how active alignment can address the persistent issue of untranslatability for readers without knowledge of the ancient language(s). It demonstrates that the 2020 translation emphasises conveying meaning as exactly as possible, consistent with the scholarly recognition the Satyricon now enjoys, while the 1922 translation prioritises the Satyricon as entertainment, the equivalent of a modern novel. The experience of manually aligning the texts proves useful for a non-linguist, lending their analysis of an unknown language a degree of semantic ‘nativeness’. In the interactive alignment practice, aided by immediate visual feedback, both aligned and unaligned text are useful for discussion. The alignment rate conveys more nuance than simply how literal the translation is. Through active alignment, the user can efficiently spot additions and omissions in translations, in both form and content, down to the specific tokens that reveal translational practices such as censorship. Furthermore, large differences between translations can imply that different versions of the ‘original’ text were used, directing attention to the manuscripts from which the Latin texts are sourced, an aspect not often addressed in published translations. Including alignment in literary analysis can also redirect non-linguists towards secondary annotation and literature, as alignment flags specifics that help direct their search through an otherwise convoluted field. While manual alignment has the advantage of promoting a highly focused interaction with the text, its time-consuming nature makes it more suitable for shorter passages.
For longer texts or entire works, there are pre-aligned versions of canonical authors such as Homer, and auto-alignment tools, such as the Ugarit auto-aligner, can conduct the work for the rest. As important and information-rich as the overview of alignment data is, first-person experience with manual alignment is both a methodology for evaluating translation and a gateway to further research that lends great nuance to the data.

Posted in SunoikisisDC, Tools

Report on the Erasmus+ Blended Intensive Program (BIP) “Intensive ENCODE: Digital Competences in Ancient Writing Cultures”

Introduction

Between April and May 2025, the University of Parma hosted the Erasmus+ funded Blended Intensive Programme (BIP) “Intensive ENCODE: Digital Competences in Ancient Writing Cultures.” This initiative is a spin-off of the ENCODE project (2020–2023), an international collaboration that aims to bridge the gap between traditional humanities education and the digital skills increasingly essential for the study of ancient written cultures.

Structured in two phases, the programme combined online learning (1 April–12 May) with an in-person week in Parma (18–24 May). It brought together around 70 Bachelor’s and Master’s students from across Europe, representing partner institutions in Germany (Cologne, Leipzig, Würzburg), Norway (Oslo), Italy (Bologna), Greece (Komotini), Spain (Madrid), Lithuania (Vilnius), and Bulgaria (Sofia).

What united this diverse group was a shared curiosity: how can digital tools reshape the way we explore the texts, scripts, and writing systems of antiquity?

In the blog post that follows, Elena Di Giorgio, Giulia Contesini, Alberto Negri, Violina Hristova, Stephanie Daneva, Nicoletta Nannini, Stephania Daviti, and Athena Mega, participants in the BIP, share their thoughts on some of the key themes that emerged from the guest lectures and collaborative activities.


Fig. 1: Students and teachers of the Erasmus+ Blended Intensive Programme (BIP) “Intensive ENCODE: Digital Competences in Ancient Writing Cultures”

Relationship with writing

Nowadays, Artificial Intelligence is revealing itself as an essential tool for many tasks pertaining to any kind of field. When it comes to historical studies, AI might be trained for the purpose of automatic recognition of texts.

Pursuing such a goal, Isabelle Marthot-Santaniello applied Deep Learning-based methodologies to papyri. In the D-scribes project she worked towards identifying all the scribes responsible for the notarial documents of the archive of Dioscoros of Aphrodito. To do so, she first established a ground truth, consisting of a preliminary dataset of images representing the known handwriting of each scribe. These samples were narrowed down to καὶ-s along with some single letters (ε, κ, μ, ω), which display marked palaeographic features and show few, if any, variations in their ductus. With these specimina a confusion matrix was produced, a table used to evaluate the performance of a classification algorithm. Currently, this modus operandi works well for inter-writer discrimination but struggles with intra-writer variation. The same principle was applied to dating papyri.
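The confusion matrix mentioned above can be illustrated in a few lines of code. In this sketch the scribe labels and the classifier’s “predictions” are invented for illustration; a real evaluation would use the D-scribes ground-truth dataset and a trained model.

```python
# Toy sketch of a confusion matrix for a scribe classifier.
# Rows are the true scribe, columns the predicted scribe; diagonal
# cells count correct attributions. Labels and predictions are invented.

from collections import Counter

def confusion_matrix(true_labels, predicted_labels, classes):
    counts = Counter(zip(true_labels, predicted_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

scribes = ["Scribe A", "Scribe B"]
truth = ["Scribe A", "Scribe A", "Scribe B", "Scribe B", "Scribe B"]
pred  = ["Scribe A", "Scribe B", "Scribe B", "Scribe B", "Scribe A"]

matrix = confusion_matrix(truth, pred, scribes)
print(matrix)  # [[1, 1], [1, 2]]
```

Off-diagonal cells show which pairs of hands the model confuses, which is exactly where intra-writer variation (one scribe writing in two ways) causes trouble.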

The idea of automatic letter recognition has also been applied in Egyptology. The Demotic Palaeographical Database Project, led by Franziska Naether, allows users to draw Demotic signs so that the AI tool can attempt to identify them, even suggesting possible matches. In terms of Unicode, hieroglyphs are not yet fully represented; moreover, thousands of signs, such as those found in the Demotic and Hieratic scripts, still need to be accounted for. A similar challenge exists in Mycenaean studies. As a result, scholars in both fields continue to rely primarily on transliterations.

Moreover, Jérôme Mairat illustrated how helpful AI has proved itself in modelling digital editions in numismatics (RPC online). He used AI to generate interpretations, and even translations, of Roman coin legendae, brief but dense inscriptions containing several abbreviations. Remarkably, a fine-tuned version of OpenAI’s GPT-4o-mini proved capable of recognising both Greek and Latin characters, delivering promising results with an estimated 95% accuracy rate. Of course, occasional errors and the well-known phenomenon of AI “hallucinations” still occurred. To refine the output, he employed APIs to automate the creation of both a textual edition (following Leiden conventions) and a basic EpiDoc XML file. For these more mechanical tasks he favours a dedicated web app, which is both more cost-effective and more reliable: unlike the general-purpose model, it performs the job without error.

Non-textual aspects of the writing support

The material characteristics of ancient artifacts bearing text on their surfaces must be duly taken into consideration within historical disciplines concerned with the study and interpretation of the text itself. 

Papyrology, for example, is a discipline marked by a high degree of fragmentation of writing supports, and for this reason it requires constant comparison between artifacts with similar content, in order to reconstruct missing text in any existing gaps. A similar approach, though less pronounced, also applies to epigraphy in the assessment of missing or hard-to-read portions of stone. 

These specific aspects of studying ancient sources are by no means marginal in DH; on the contrary, the digitization of such sources calls for methodological reflection. A useful point of reference is the Digital Marmor Parium project, led by Monica Berti. In working with this epigraphic document, particular attention is paid to the editorial layer during the encoding process, especially in areas where the text is fragmentary. When reconstructing missing portions, it is crucial not only to propose plausible readings but also to clearly indicate the editorial choices made and the degree of certainty involved.

Among the non-textual data of the writing support, iconography also occupies a place of common interest. A relevant example of the attention dedicated to this field of studies is the Orasis project, one of the digital epigraphy projects developed in Bulgaria which were presented by Elina Boeva. Orasis is a digital platform for visualizing and presenting inscriptions from monuments of Christian art from the Byzantine and post-Byzantine period on the territory of Bulgaria and other Balkan countries. In this useful resource, it is possible to find a detailed description of the location of the image in the context of the iconographic program of the church. 

Two more examples of digital endeavors in the humanities are worthy of mention. The first is the database MetrICa (Metrical Inscriptions of Campania), carried out by Pietro Liuzzo. MetrICa investigates the relationship between the text and its material support, graphic rendering, palaeography, and the original archaeological context, where it can be reconstructed, emphasising the importance of the reuse of open data and of collaboration.

Digital Scholarly Editions

In recent years, numerous projects in the field of Digital Humanities have emerged and evolved, each dedicated to different types of artefacts from antiquity. The diversity of research objects, project goals, and established scholarly traditions requires a range of methodological approaches, sharing a common objective: to produce high-quality, scholarly, and accessible digital publications, following the FAIR principles.

The already mentioned Digital Marmor Parium project, for example, uses annotation and named-entity extraction to build a database designed not only for specialists in linguistics and literature, but also as a resource that can be integrated into broader research initiatives. Extracting this type of data from epigraphic monuments and papyri presents several challenges: not all resources and authority lists are openly accessible (for instance, Trismegistos), and others suffer from structural issues.

One particularly ambitious project is The Digital Rosetta Stone by Franziska Naether, Monica Berti, and their team, which combines 3D models of the monument, high-resolution imagery, visual alignment of the hieroglyphic, Demotic, and Ancient Greek texts, and tools such as Treebanking, semantic annotation and text alignment (Ugarit).

Similarly, the Damos project by Federico Aurora, a database of Mycenaean inscriptions, demands a tailored approach to text encoding owing to the unique features of Mycenaean Greek. When encoding these inscriptions in EpiDoc, numerous difficulties and inconsistencies arise in comparison with encoding inscriptions in Classical Greek or Latin. A notable distinction is the requirement to annotate each word explicitly in Mycenaean texts, since some are represented by logograms; this is not necessary when working with texts in Classical Greek or Latin.

In Bulgaria, key examples include Tituli (Latin inscriptions), Telamon (Greek inscriptions), and Orasis (Byzantine texts found in churches), presented by Elina Boeva. These projects focus on producing digital publications of the inscriptions, using an EpiDoc template tailored specifically to the needs of each project.

The model of digital publishing increases accessibility, allowing broader public and academic engagement without requiring physical access or paid subscriptions. However, it also comes with challenges, including the ongoing need for institutional funding to support hosting, maintenance, and database updates.

Content and meaning

Using digital tools for the study of Greek and Latin epigraphy and papyrology offers inspiring content, fostering awareness of the importance of digital competencies and providing training both in research methods and in the study of ancient writing cultures. Such analysis brings out a range of linguistic and semantic aspects that were previously difficult or time-consuming to explore, and promotes the digital transformation of cultural heritage by bridging disciplinary gaps.

Digital technologies have significantly advanced paleography and script recognition. Computer vision techniques facilitate the identification of characters in damaged manuscripts, while machine learning models enable precise dating of handwriting styles such as uncial and cursive scripts. Morphological analysis benefits from automatic lemmatization and tagging, particularly in inflected languages like Ancient Greek and Latin. Additionally, digital syntax parsing improves reconstruction of incomplete structures and detection of formulaic expressions in legal and religious texts. These methods also support the study of linguistic variation, including dialectal differences, orthographic shifts, and code-switching phenomena. 

Moreover, digital tools play a crucial role in exploring semantic aspects of ancient texts. Named Entity Recognition (NER) automatically identifies personal names, places and institutions, facilitating the construction of prosopographical databases by linking individuals across documents. Topic modeling and thematic clustering enable the detection of recurring themes, such as taxation, marriage, or military service, across large corpora, supporting semantic classification and contextual analysis. Tools for intertextual and citation analysis uncover textual reuse, quotations, and allusions to earlier literary, legal, or religious sources, shedding light on intellectual networks and the transmission of knowledge over time.

Digital projects analyzing ancient texts increasingly employ computational methods to investigate linguistic and semantic features. Named Entity Recognition (NER), used by Trismegistos, Papyri.info and the Digital Marmor Parium project identifies and standardizes personal and place names, aiding prosopographical and geospatial analysis. Morphosyntactic annotation, as in the Perseus Digital Library, addresses the complexity of inflected languages like Ancient Greek and Latin. Semantic disambiguation techniques, such as word embeddings and topic modeling (e.g., Tesserae), resolve lexical ambiguity. Thematic clustering, seen in the Digital Corpus of Literary Papyri, supports text classification. Projects like Pelagios emphasize intertextuality and citation networks. Multilingual corpora (e.g., Greek-Coptic) and infrastructures like CLARIN, DARIAH-EU, and ENCODE further enhance scholarly research.
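As a toy illustration of the entity-linking step that turns NER output into prosopographical data, the sketch below groups person mentions across documents. The normalisation rule, the names, and the document sigla are all invented for illustration; systems such as Trismegistos use far more sophisticated matching and authority lists.

```python
# Deliberately simplified sketch of linking person mentions across
# documents after NER, the step that feeds prosopographical databases.
# The k/c normalisation rule and the toy data are invented.

def normalise(name):
    # Collapse one trivial spelling variation before matching.
    return name.lower().replace("k", "c")

documents = {
    "Doc 1": ["Dioskoros", "Apollos"],
    "Doc 2": ["Dioscoros"],
}

persons = {}  # normalised name -> documents attesting it
for doc, mentions in documents.items():
    for mention in mentions:
        persons.setdefault(normalise(mention), []).append(doc)

print(persons["dioscoros"])  # the same person linked across both documents
```

Real pipelines must also handle abbreviation, damage, homonymy (different people sharing a name), and multilingual spellings, which is why authority lists matter so much.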

Linked Open Data and Semantic Web technologies are crucial for digital analysis of ancient texts. EpiDoc, based on TEI-XML, standardizes encoding of inscriptions and papyri with rich metadata. Projects like Trismegistos use RDF to link entities via ontologies such as Pleiades and SNAP:DRGN, promoting integration. The Pelagios Network connects texts to geographic data for spatial analysis. Tools like Papyri.info Editor and Recogito support collaborative annotation, while databases such as EDH and EAGLE offer structured, API-accessible resources enhancing interoperability and epigraphic research.

Variety of languages, chronology, and civilizations

Studies of ancient civilizations must contend with a history made up of places, chronologies, and, above all, extremely diverse languages. When these fields of study intersect and engage with the domain of digital humanities, the result is the need for digital tools with specific features: they need to be capable of conveying the diversity of content, depending on the contexts under examination, in a way that is as simple as possible for developers to use and as accessible as possible for the audiences who rely on them. 

One of the most significant aspects in the encoding of ancient texts lies not only in the goal of making the texts increasingly accessible, but also in connecting features that may be shared across different languages and civilizations, as through the analysis and study of linguistic aspects. 

For instance, it is precisely from this perspective that the WordNet project was created: a large lexical database or “electronic dictionary” developed at Princeton University for modern English. The aim of WordNet is to collect and interlink not just words based on their meanings, but specific senses of words. WordNet labels the semantic relations among words and provides information about two fundamental properties of the lexicon: synonymy and polysemy. In the study of ancient languages, it proves especially useful to link WordNets with other textual and lexical resources, revealing connections between the semantics of lexical items and their syntactic context. In philology especially, this connection may help researchers fill gaps in the written record, since entire sets of near-synonymous and semantically related words are made readily available.
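A toy model may make these two properties concrete. The sketch below represents synsets as plain sets of Latin words; the sense labels and entries are hand-made for illustration and are not Princeton WordNet data.

```python
# Toy model of the two lexical properties WordNet encodes:
# synonymy (one sense shared by several words) and polysemy
# (one word participating in several senses). All entries invented.

synsets = {
    "uncle.maternal": {"avunculus"},
    "uncle.paternal": {"patruus"},
    "happy.glad": {"laetus", "felix"},
    "lucky": {"felix", "faustus"},
}

def senses_of(word):
    """Polysemy: every sense a word participates in."""
    return sorted(s for s, words in synsets.items() if word in words)

def synonyms_of(word, sense):
    """Synonymy: the other words sharing one sense."""
    return sorted(synsets[sense] - {word})

print(senses_of("felix"))             # ['happy.glad', 'lucky']
print(synonyms_of("felix", "lucky"))  # ['faustus']
```

Even this toy shows why sense-level linking matters for philology: querying a sense returns its whole near-synonym set, not just one word.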

Furthermore, the project faces a pivotal challenge: whether to adopt English synsets as a foundational structure (an approach that risks significant inaccuracies) or to construct an entirely new synset framework, which would require extensive linguistic and technical effort. Additionally, ancient languages contain conceptual distinctions that are not lexicalized in English: avunculus and patruus, for example, refer to the maternal and the paternal uncle respectively, a distinction not explicitly encoded in English.

Posted in EpiDoc, Events, report, Teaching

University of London Digital Humanities Placement report

By Shibo Wu and Matthew O’Connor

This spring we, two students of the Digital Humanities MSc at UCL, undertook a three-week placement at Senate House Library and the Makerspace at the University of London, digitising artefacts from the Harry Price Archive (part of the Harry Price Collection) in Senate House Library under the supervision of Gabriel Bodard and Tansy Barton. This placement was part of an ongoing initiative to enhance the accessibility and preservation of rare and fragile items in University of London archives. It marked the continuation of the digitisation initiative that began with the Ehrenberg Collection at the Institute of Classical Studies, introducing the first digitised artefacts from the Harry Price Collection.

The placement commenced with two days of inductions to both the Ehrenberg Collection and the Harry Price Collection, familiarising us with their historical significance, cataloguing practices, and handling procedures. The Harry Price Collection was established by the psychical researcher Harry Price (1891-1948), a contemporary of Harry Houdini, containing articles and artefacts related to magic, spiritualism and psychical research. Donated to the Senate House Library in 1936, and bequeathed to the University of London in 1948, the collection includes various items from Price’s own investigations into occult and/or spiritual phenomena, as well as random objects relating to his antiquarian and other interests.

We conducted several photography sessions to capture high-quality images of select artefacts in the Harry Price Collection. These items were placed on a rotating turntable, photographed with a Nikon D850 digital SLR camera, and imported into the Agisoft Metashape Pro software for processing. We were able to produce 3D models for our artefacts using this software, which were then uploaded onto the sharing platform Sketchfab.

Some of these items, particularly those with metallic surfaces, proved challenging to model because inconsistent reflections disrupted the software’s alignment of images, even with adjusted lighting, exposure and the use of a light tent. Other items were deliberately symmetrical or featureless, being designed as tools for magic tricks, and therefore offered few reference points for marker-based alignment when merging chunks together. We learnt from these less successful attempts, allowing us to mitigate or avoid these issues when dealing with subsequent artefacts.

This early troubleshooting allowed us to develop careful and creative solutions for the artefacts we uploaded to Sketchfab, namely the Silver Ingot, the Metal and Glass Flashlight, the Lead Imitation Figurine and the Rapping Hand. For the Silver Ingot, we divided the model into front and underside chunks and merged them manually using precise marker placement, in order to preserve the engravings on the front chunk (above) (direct link). We adapted this method for the Metal and Glass Flashlight, whose fractured glass side distorted the model’s geometry (below) (direct link). To resolve this, we created separate low-resolution chunks for each half of the object, then constructed a clean base geometry onto which smaller high-quality chunks could be accurately aligned.

A micro lens was used to capture the finer surface details of both the Lead Imitation Figurine and the Rapping Hand, allowing for improved clarity on intricate features such as the figurine’s face and the lace of the hand’s velvet cuff. However, the micro lens also picked up background particles, requiring significant manual cleaning in post-processing. The Rapping Hand in particular required extensive manual and automated processing, including masking, cleaning, and smoothing to address its complex geometry. Because of these processes, and upload constraints, the final model had to be decimated and compressed, which resulted in a noticeable loss of accuracy. By contrast, the Lead Imitation Figurine (not from the Harry Price Archive) benefited considerably from the micro lens, its surface definition resulting in a highly detailed 3D model upon upload (below) (direct link).

Gabriel provided us with a detailed workflow document, updated by placement students over the past several years, to which we also contributed our notes, documentation and paradata. This served two crucial purposes: it ensured that mistakes were recorded systematically, highlighting where the process had faltered, and acted as a reference for future digitisation efforts. This workflow offered guidance from previous projects when we met with significant issues, and allowed us to present our own insights which might benefit future teams. By documenting each step in the process, including our failed attempts, we contributed to a growing repository of practical knowledge.

We are immensely proud to have contributed the inaugural items to this new digitisation collection, setting the foundation for expanded digital access to the Harry Price Archive. We hope that these initial models will serve as a stepping-stone for future viewers and contributors to engage with these artefacts remotely, preserving their significance for decades to come. We are deeply grateful to Gabriel Bodard, the Institute of Classical Studies, Tansy Barton, Salvador Alcantara Pelaez and Senate House Library for providing us with this incredible opportunity, and we look forward to seeing this digital archive grow in the years ahead.

The models uploaded to the Harry Price Collection on Sketchfab can be found here.

Posted in Teaching

Digital Classicist London 2025 programme

This year’s Digital Classicist London summer seminar series, hosted by the Institute of Classical Studies, University of London, invited proposals on any aspect of the ancient or pre-colonial worlds, including archaeology, cultural heritage, history, language, literature and their reception, that address innovative digital approaches to research, teaching, dissemination or engagement. We particularly welcomed proposals that integrate research, archival and scientific data across disciplinary or other boundaries.

The organisers (Gabriel Bodard and Elizabeth Koch-Kölük at the ICS, Stephen Kay at the British School in Rome, and Katharine Shields at King’s College London) are delighted to share the programme of seminars, all of which will be streamed live as well as delivered in person in London. The rich variety of presenters, methodologies and subject areas represented in this programme is especially pleasing to us, and helps to showcase both the depth and breadth of this thriving discipline.

All seminars are held at 17:00 BST (UTC+1) on Fridays, live on Youtube and in person in Senate House MakerSpace, room 265, University of London.

Booking is recommended for in-person attendance; it is not required for online attendance (live or any time afterwards) on Youtube. Use the links below.

  1. Friday June 6: Matteo Romanello (University of Zurich) & Charles Pletcher (Tufts University), Introducing Kōdōn, a Minimal Computing Library for Publishing Digital Commentaries (youtube) (register)
  2. Friday June 20: Valentina Lunardi (University of California, Los Angeles) & Barbara McGillivray (King’s College London), Static and contextual embeddings for tracing semantic change: the case of Christian Latin (youtube) (register)
  3. Friday July 11: Thibault Clérice (Inria, Paris), Distributed Text Services for Digital Classics (youtube) (register)
  4. Friday July 18: Ester Salgarella (University of Aarhus), Sort It, See It, Say It. Digital strategies to revive a Bronze Age Aegean Script (youtube) (register)
  5. Friday July 25: Chiara Senatore (La Sapienza, Roma), Digital editions of classical texts for GLAMs (youtube) (register)

We look forward to seeing you there!

Posted in seminar

Material Digital Humanities seminar 2025

The Material Digital Humanities seminar is organised by Gabriel Bodard (Digital Humanities, University of London, UK) and Chiara Palladino (Department of Classics, Furman University, USA) in 2025. This series presents a range of discussions around materiality and the research possibilities offered by digital methods and approaches. Beyond just the value of digitization and computational research to the study of material culture, we are especially interested in theoretical and digital approaches to the question of materiality itself. We do not restrict ourselves to any period of history or academic discipline, but want to encourage interdisciplinarity and collaborative work, and the valuable exchange of ideas enabled by cross-pollination of languages, areas of history, geography and cultures.

For this year’s seminar, the organisers have arranged a conversation between several of the contributors to a volume that appeared a year and a half ago, on the relationships between materiality and the digital in cultural heritage, and other scholars working in related areas.

Palladino C. & Bodard G. (eds.) 2023. Can’t Touch This: Digital Approaches to Materiality in Cultural Heritage. London: Ubiquity Press. DOI: https://doi.org/10.5334/bcv

As with previous years, this series represents a wider conversation around issues relating to material culture, heritage and materiality, and the ways in which digital methods and the discipline of digital humanities engage with them. Even traditional areas of DH focus, including text, images and (born-)digital data, suffer when their material and other contexts are ignored, be they spatial, chronological, ethical, historical or matters of scholarly engagement.

Please do join us and contribute to these discussions!

  1. Wednesday March 5, 2025. Rebecca Kahn (University of Vienna), Skulls, skin and names: The ethics of managing heritage collections data online (Youtube)
  2. Monday March 10, 2025. Dipali Mathur (Ulster University, Belfast), The Limits to Green Growth: Rematerialising UK’s Pathway to Net Zero
  3. Monday March 24, 2025. Chijioke Okorie (University of Pretoria), Digital treatment of African cultural heritage: Shifting landmarks and implications for copyright exceptions for archives (Youtube)
  4. Friday May 2, 2025. Martina Filosa (Universität Köln), Usama Gad (Ain Shams, Cairo), Gabriel Bodard (University of London), Making the Implicit Explicit: Digital editions of ancient text-bearing objects (POSTPONED – will reschedule after summer)
  5. Tuesday May 13, 2025. Rita Lucarelli (Berkeley), Ancient Egypt Reimagined: Teaching about the Past Through Digital Humanities (Registration)
  6. Tuesday June 3, 2025. Francesco Bianchini (Cambridge University), Sanskrit inscriptions as tangible artefacts: exploring digital tools (Registration)

All seminars are free and online only. Please register for the zoom link.

Full details and updates are in the SAS Digital Humanities calendar.

Posted in Events, seminar

Online Tools for Handwritten Text Recognition: A Comparative Review of Transkribus and eScriptorium for Byzantine Paleography

written by Konstantina Eleftheriadi, University of Cologne

Introduction

This article investigates online tools for Handwritten Text Recognition (HTR). Specifically, it examines and compares the functionality of the “Transkribus” and “eScriptorium” platforms. Initially, it outlines practical applications of each platform. In the case of Transkribus, a custom Deep-Learning HTR model for Byzantine Greek is developed and evaluated; in the case of eScriptorium, an established model for the same tradition is tested instead. The two experiments reveal the efficacy of each platform. The platforms are then compared briefly, before the article highlights prospective challenges regarding the applicability of such tools for research in the field of digital paleography.

The framework for the endeavor has been provided by the “Sunoikisis Summer 2024: Digital Classics and Byzantine Studies” Consortium, convened by Martina Filosa, Monica Berti, and Gabriel Bodard, for the Academic Summer Semester 2024; specifically, its fourth session, on “HTR and OCR from papyrus to codex”, and the four exercises proposed there.

Online Tools for Handwritten Text Recognition

Manuscripts constitute vital resources for the formation of historical narratives, insofar as they are direct carriers of information concerning past sociocultural circumstances (Saraswati 2013). The study of the material may often be impeded by practical conditions, chief amongst them being the fragility of the objects and their dispersion among different collections and locations. Digitization efforts have significantly increased accessibility to documents, motivating their comprehensive study remotely (Marthot-Santaniello 2021). The development of highly-specialized tools that enable innovative approaches to the new, digital material in academic contexts has also helped invigorate interest in their study (Jang 2020). “Digital Paleography” has emerged as a prolific scientific direction in its own right, garnering due attention from experts.

Transkribus: Creating a custom HTR-Model

The second and third exercises provided instructions for the development and evaluation of Deep-Learning models for Handwritten Text Recognition in Byzantine paleography. Excerpts from manuscripts of the Bodleian Library Collection were supplied in the form of JPEG images, along with transcriptions of their contents, to be used during the training and validation stages. The process was conducted through the “Transkribus” platform, co-developed by the University of Innsbruck (A Short History, 2023). As per the directions of the second exercise, seven pages in total were used in the endeavor. The pages selected originated from MS. Barocci 102, a Commentary on Isaiah by Basil the Great, compiled around the early 12th c. The model was trained on six pages, with the remainder used for validation and testing.

In the first stage, the model used the training pages and their ground-truth transcriptions as examples, to begin learning to recognize character-sequences in the images. Afterwards, it fine-tuned itself by testing its accuracy against the validation dataset, for which it also had access to transcriptions. Finally, it attempted to transcribe the text on the test page, for which it did not have a transcription. The entire process was conducted automatically through the platform, with no further intervention by the user. A preliminary examination of the results on the test dataset immediately revealed disparities from the original text (Fig. 1). Especially concerning was the misinterpretation of word-clusters in areas where the text was densely written, that is, with insufficient spacing between glyphs.

Fig. 1 – Transkribus: Page transcribed utilizing the created model.

The experiment helped further highlight the difficulties cited in the literature concerning the development of functioning models for HTR implementations (Pavlopoulos et al. 2023A). Peculiarities in the handwritten text impeded the recognition and transcription processes, leading to an array of mistakes. Deviations in positioning posed a particular obstacle, as inconsistent spacing and drift from the baseline led to frequent misidentifications and incorrect collation of words. Ligatures and abbreviations were equally challenging, as the model could not separate the individual letters that formed them. As shown by the comparison of common-word instances, stylistic similarities among glyphs also confused the algorithm: υ (ypsilon) and ν (ny), for example, were often interchanged. These issues may explain the disparity in detected characters, as well as the significant Character Error Rate (CER) calculated (Fig. 2: 43.99%).
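The Character Error Rate cited above is conventionally computed as the edit (Levenshtein) distance between the recognised text and the ground-truth transcription, divided by the length of the ground truth. The following is a minimal sketch of that standard definition in Python, not Transkribus’s own code:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = curr
    return prev[-1]

def cer(recognised: str, ground_truth: str) -> float:
    """Character Error Rate: edit distance over ground-truth length."""
    return levenshtein(recognised, ground_truth) / len(ground_truth)

# A model that interchanges υ and ν, as described above, gets one
# character wrong in a three-character word:
print(round(cer("σνν", "σύν"), 2))  # 0.33
```

On this definition, a CER of 43.99% means that nearly half of the ground-truth characters would need to be corrected, which is why the test-page output required such heavy post-correction.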

Fig. 2 – Transkribus: Presentation of the created model.

eScriptorium: Using a pre-trained HTR-Model

The fourth and final exercise provided a pre-trained model for layout recognition and automatic transcription of Byzantine Greek handwritten manuscripts of the 10th c. A concise workflow for employing the model within the “eScriptorium” platform, developed by the École Pratique des Hautes Études (Stokes et al. 2021), was outlined in the instructions of the exercise. Though the platform may be installed locally on Linux or Mac computers, an account on the server of the CREMMA Project was also made available in order to access the application, overcoming the inability to run it on a Windows operating system. The latter option was favored, as it allowed for the processing and storing of information on a cloud server, minimizing the demand on the student’s machine. For the purposes of the exercise, three pages of the Palatinus graecus 23 from the Bibliotheca Palatina were selected for examination. An examination of the produced text showed promising results in terms of fidelity to the original. Overall, the transcription was satisfactory and would require significantly less post-correction than those produced previously (Fig. 3).

Fig. 3 – eScriptorium: Visual comparison of recognized transcription on p. 49 of the Codex Palatinus graecus 23. On the top, the first 8 lines on the page, as segmented and transcribed automatically utilizing the provided model, on eScriptorium. On the bottom, the page as segmented on the digital Bibliotheca Palatina.

Platform Comparison

Transkribus and eScriptorium, as two examples of highly specialized platforms, ultimately accomplish similar results. Both were developed in an academic context and have been regularly employed for robust scientific work. Both offer a user-friendly interface that enables engagement with their tools with relative ease. The steps to be followed for each procedure were clearly communicated through labeled buttons and dedicated panes. Transkribus in particular offered extensive instructions in its “Help Page”, which further enhanced the experience. Waiting times were similar, in the range of 15-25 seconds both for segmenting and for transcribing documents.

A major disparity between the applications would relate to actual accessibility. Transkribus is a licensed platform that requires a paid subscription to be utilized to its full potential. Currently, it only operates through a provided server, wherein operations are confined to what is allowed as part of the running payment plan. The API behind the platform is only accessible to collaborating organizations. Conversely, eScriptorium is open-source and may be freely accessed on Linux and Mac operating systems, or through a cloud server. Its full documentation is available online, along with its API. Community support is highly encouraged, in order to consolidate the platform. A final difference between the two is that eScriptorium also allows the export of models trained on the platform, ensuring the shareability of its products. Thus, the results of a scientific work may be replicable in new contexts.

Discussion and Challenges

The digital publication of produced data is steadily becoming a sine qua non for the investigated disciplines (Galleron – Idmhand 2020). Advancements in relevant software have permitted complex interactions with the material in varied environments. Cloud-based operations are becoming more of a norm, limiting the burden on private operating machines. The compounded engagement of experts with the new methodologies has opened up a new epistemological horizon for the disciplines, in the era of “Digital Humanities”. By examining the prospects and limitations of the tools presented, major challenges in the realm of Digital Paleography may also begin to be delineated.

The threat of data obsolescence remains a significant concern for pertinent implementations. Projects that are no longer retrievable or emulable are often encountered, as no measures were taken to ensure their permanence in digital form (Strange et al. 2023). Informed data management strategies should be deliberated over, to ensure that the models developed and the archives they are included in are not only functional in the present, but also replicable in the future. Interoperability may be set as an indispensable condition in publishing related work for posterity, to motivate its continuous refinement through the input of the scientific community. Relatedly, “digitization-bias” should be taken into account when interacting with relevant tools. Datasets may be limited to what has been digitized, or simply preserved in accessible digital form, thus affecting the analytical scope of scientific research that heavily relies upon digital databases.

Additionally, the employment of digital tools such as the ones surveyed for research and analysis en masse could ultimately result in the “black boxing” of scientific knowledge. In the field of Digital Humanities, this term is understood as a process in which over-dependence on software to perform complex queries limits the user’s understanding of essential background operations (Stokes 2009). Procedures in digital environments often require a sequence of operations in advanced mathematics and statistics, vital components of the deep-learning algorithms enlisted. At the same time, an efficient tool that produces a desired result may not invite scrutiny from the researcher, as it serves its purpose successfully without intervention. When relying primarily on applications that simply accept input and produce output with little clarification, those procedures become obscured to the user, who cannot effectively re-apply them without fear of a loss of scientific integrity.

Regarding the problems presented, a viable solution has emerged in the form of extensive collaborations. When dealing with such heterogeneous structures, an array of complications is to be expected, which can be alleviated through interdisciplinary cooperation. Indeed, the involvement of experts from varied backgrounds has been common practice in related projects. Computer scientists and data analysts have often aided the work of philologists; in a reciprocal manner, philologists have motivated pioneering approaches in those areas. Both sides have also sought to expand their knowledge and toolkit by specializing in digital applications for the humanities (Galleron – Idmhand 2020).

Shareability of data and analytical processes has become another central objective in modern academia. Transparency concerning tools utilized, problems encountered, and error-correcting methodology favored additionally strengthens relevant efforts (Joyeux-Prunel 2024). By adhering to protocols that promote open access and collaborative practices, a fertile environment for the proliferation of the disciplines may be fostered, both now and for the future. As tools become more powerful, and experts gain more knowledge through the exchange of ideas, new directions for even more creative analyses may surface. Ensuring the explainability and reproducibility of published methods motivates participatory values among the scientific community, which aim at refining the introduced epistemology. The consistent reference to the FAIR Principles (FAIR Principles, 2022) for the publication of digital data serves to exemplify that point.

This scientific paper was supported by the Onassis Foundation – Scholarship ID: F ΖT 020-1/2023-2024.
Posted in SunoikisisDC, Tools

Digital Approaches to Cultural Heritage 2024 Collaborative Reading List

The students attending my masters-level class on Digital Approaches to Cultural Heritage in Spring 2024 were drawn from the London Intercollegiate Classics programme (students in Classics, Ancient History, Classical Art and Archaeology, Late Antique and Byzantine Studies, and Reception of the Classical World from King’s College London, Royal Holloway University of London and University College London) plus a few Institute of Classical Studies doctoral students auditing the module. The course was taught with the support of the Sunoikisis Digital Classics online programme, with common sessions on Youtube, and ten in-person seminars and practical sessions hosted in the Senate House MakerSpace.

To complement the readings provided through SunoikisisDC, we invited each student to contribute one suggested “reading” per week, ideally open access but not necessarily formally published articles (blog posts, videos, news reports, etc. were welcome) on the subject of the course: the intersection of digital humanities and the study of antiquity and heritage. I borrowed the idea for this collaborative reading list from my colleague Emlyn Dodd, and was delighted with the gusto and sensitivity with which the students took to it. The results of this year’s collaboration are reproduced below, including brief comments from the student who contributed each item (with the explicit permission of all students), with thanks and kudos to all.

The brief:

Please add below here at least one example per week of a reference you have found that is relevant to this course and might be of interest to the others. This need not be a formal publication, but can also include blog posts, videos or podcasts, wiki pages, websites, online editions or tools, etc. Explain briefly in the message why you found it useful, and be prepared to share it with the class in the Tuesday seminar.

Maxime Guénette:

Elliott, Tom. ‘Epigraphy and Digital Resources’. In The Oxford Handbook of Roman Epigraphy, edited by Christer Bruun and Jonathan Edmondson, 78–86. Oxford: Oxford University Press, 2014. https://doi.org/10.1093/oxfordhb/9780195336467.013.005.

It is a fairly short chapter of The Oxford Handbook of Roman Epigraphy. Tom Elliott introduces the reader to several digital epigraphic databases and editions, journals and references useful for further reading, as well as initiatives by associations like EAGLE in epigraphy. It is an excellent introduction to digital epigraphy for students or other scholars keen to learn more about digital humanities in general.

Euan Bowman:

Novotny, J. and K. Radner, ‘Official Inscriptions of the Middle East in Antiquity: Online Text Corpora and Map Interface’. in Crossing Experiences in Digital Epigraphy, eds. A. De Santis and I. Rossi, 141-153. Berlin: De Gruyter, 2018.

This is a pretty brief overview of what the OIMEA (Official Inscriptions of the Middle East) project is and what it can offer scholars and those with an interest in the cultures of (primarily) ancient Mesopotamia. Most of the inscriptions they have included so far are in Akkadian and Sumerian, although they also intend to add Old Persian and Aramaic. I thought it would be useful to include here as it highlights some other benefits and difficulties found with digitised epigraphic databases (such as the difficulty of handling multilingual translations) that were not highlighted by the set reading.

David Roots:

Codex Sinaiticus: https://codexsinaiticus.org/en/

Not an article but a digital project, which I came across in my course and found interesting: the earliest surviving codex of the Bible, from the 4th century, in Greek. The website allows you to examine the original manuscript and, I believe, to download some of the text in HTML.

Maxime Guénette:

https://ercolano.beniculturali.it/herculaneum3dscan/

A digital project called Herculaneum 3D Scan. It gives scholars, cultural heritage specialists and the wider public the opportunity to virtually visit several Roman houses in Herculaneum. One of the nice aspects of this project is that you can download the 3D models and also calculate distance, volume and other measurements with WebGL.

Elizabeth Koch-Kölük:

https://ijcs.ro/public/IJCS-17-04_Carlan.pdf

This article from the International Journal of Conservation Science (Vol. 8, Issue 1, Jan-March 2017: 35-42) describes how close-range photogrammetry was used to produce a 3D model of the ‘Arutel Roman Castrum’. Agisoft PhotoScan and Netfabb (for mesh editing) were applied to 2,200 photographs (all taken at ground level), and 270 m of walls were digitally modelled. The first author is from the Faculty of Geography at the University of Bucharest and the second author is a partner in a commercial company specializing in photogrammetry and GIS.

David Roots:

https://sketchfab.com/rochestercathedral and/or https://www.rochestercathedral.org/virtual

This is a link to photogrammetry scans of Rochester Cathedral, made by the Rochester Cathedral research guild (https://www.rochestercathedral.org/researchguild). I found this useful in my first-year presentation on the cathedral, especially as it was during a COVID lockdown and I could not visit the cathedral in person. In this way anyone, including academics across the world, can experience the cathedral’s architecture.

Euan Bowman:

Karagkounis, D, and S. Tsanaktsidou, ‘The Restoration of the Main Theatre of the First Ancient Theatre of Larissa, Greece, Assisted by 3D Technologies’. in Transdisciplinary Multispectral Modeling and Cooperation for the Preservation of Cultural Heritage, edited by A. Moropoulou; A. Georgopoulos, 13-23. Switzerland: Springer.

Interesting chapter on how 3D modelling/mapping can be used in the reconstruction of ancient sites (specifically a Hellenistic theatre in Larissa). I included it since Thessaly is one of those regions that has not received much attention archaeologically so I was interested to see how 3D technologies had been used here. It also gives a different view from assigned readings about the utility of creating 3D models, i.e. to aid in restoration efforts.

Noam Mendzelevski:

Leore Grosman, Avshalom Karasik, Ortal Harush, & Uzy Smilanksy. (2014). Archaeology in Three Dimensions: Computer-Based Methods in Archaeological Research. Journal of Eastern Mediterranean Archaeology & Heritage Studies, 2(1), 48–64. https://doi.org/10.5325/jeasmedarcherstu.2.1.0048

A good overview of the use of photogrammetry as part of archaeological investigation, summarising the process of making digital 3D models before highlighting their numerous applications; including – particularly usefully – how digital 3D models, through the development of new computer algorithms, can reveal historical answers from material remains that we would otherwise struggle or be unable to provide.

Maxime Guénette:

Weiland, Jon. 2021. Review: Pleiades. https://classicalstudies.org/scs-blog/jon-weiland/review-pleiades

Blog reviewing the digital gazetteer Pleiades, the leading geographical gazetteer in Digital Classics. The author presents a summary of the tool and its features, its growth, as well as strengths and weaknesses. It is a very good introduction on how Pleiades should be used and its importance in Digital Classics.

David Roots:

Ryan Horne (2020) Beyond lists: digital gazetteers and digital history, The Historian, 82:1, 37-50, DOI: 10.1080/00182370.2020.1725992

This article discusses the advantages of digital gazetteers over physical ones – they have no limitations on the amount and display of information they can provide – and how, using standard URIs and tools such as Recogito, they can be connected and added to other documents, as discussed in the lecture.

Elizabeth Koch-Kölük:

Foka, A., McMeekin D. A., Konstantinidou, K., Mostofian, N., Barker, E., Demiroglu, O. C., Chiew, E., Kiesling B. and Talatas L. 2021. Mapping Ancient Heritage Narratives with Digital Tools. In: Champion, E. M. (ed.) Virtual Heritage: A Guide. Pp. 55–65. London: Ubiquity Press. DOI: https://doi.org/10.5334/bck.f. License: CC-BY-NC

This book chapter examines the 2nd c. CE Periegesis Hellados (Description of Greece) by Pausanias. The narrative covers journeys between sites in Greece and exceptional objects found there. The chapter discusses the use of Recogito and the issue of ‘teasing apart’ movement, memory and space and how they interrelate. The authors used a collection of gazetteers, such as Pleiades, ToposText, Judith Binder’s Art History gazetteer and that of the German Archaeological Institute (DAI).

Noam Mendzelevski:

Grossner, K., Janowicz, K., & Keßler, C. (2016). Place, Period, and Setting for Linked Data Gazetteer. In M. L. Berman, R. Mostern, & H. Southall (Eds.), Placing Names: Enriching and Integrating Gazetteers (pp. 80–96). Indiana University Press. https://doi.org/10.2307/j.ctt2005zq7.11

The article explores how the methodological framework of Linked Data (i.e., the combination of multiple reference frameworks in a standardised ontological pattern) can enable the construction of comprehensive digital historical gazetteers which meet criteria of extensibility, multivocality, integration, and sustainability. It also further delineates additional requirements for comprehensiveness, such as the need for inclusion of temporal components alongside spatial components.
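To make the Linked Data idea concrete, here is a hypothetical gazetteer record sketched as a Python dictionary. The field names loosely follow the community Linked Places conventions rather than Grossner et al.’s own schema, and the example URIs are purely illustrative; the point is that the `links` entries are what tie one gazetteer’s record to the same place in other reference frameworks:

```python
# A hypothetical linked-place record combining spatial, temporal and
# naming components (field names in the style of the Linked Places format).
place = {
    "@id": "https://example.org/gazetteer/athens",   # this gazetteer's own URI
    "type": "Feature",
    "properties": {"title": "Athens"},
    "names": [                                        # multivocality: many toponyms
        {"toponym": "Ἀθῆναι", "lang": "grc"},
        {"toponym": "Athens", "lang": "en"},
    ],
    "when": {                                         # the temporal component
        "timespans": [{"start": {"in": "-0800"}, "end": {"in": "0600"}}]
    },
    "links": [                                        # alignment with other gazetteers
        {"type": "closeMatch",
         "identifier": "https://pleiades.stoa.org/places/579885"},
    ],
}

def linked_identifiers(record: dict) -> list:
    """URIs of matching records in other gazetteers (the Linked Data part)."""
    return [link["identifier"] for link in record.get("links", [])]

print(linked_identifiers(place))
```

Because every record carries stable URIs for itself and its matches, independently maintained gazetteers can be integrated and extended without forcing a single authority list, which is the extensibility and integration criterion the chapter describes.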

Elizabeth Koch-Kölük:

Mafredas, Thomas & Malaperdas, George. (2021). Archaeological Databases and GIS: Working with Databases. European Journal of Information Technologies and Computer Science 1(3): 1-6. DOI: 10.24018/ejcompute.2021.1.3.20.

This article focuses on how a database can provide a detailed record of archaeological excavations, and on a database’s subsequent use in GIS research. The authors list the advantages of structuring data in a database environment, discuss which attribute data should be selected, and show how data entry becomes more efficient and more accurate when using handheld devices on site. It is also noted that updating and changing the database should be considered when compiling it. The second part of the article briefly outlines an excavation project (Bethsaida, Israel), describing how its database of material finds (coins, glass, metal, ceramics) was incorporated into the GIS research. The aim was to map the geographical locations of different material objects in their respective historical time layers.

 Euan Bowman:

Parlıtı, U. (2021) ‘An Evaluation on Eastern Anatolia Late Iron Age
(Persian/Achaemenid Period)’. Nisan 1: 107-120.

https://www.academia.edu/49043546/An_Evaluation_on_Eastern_Anatolia_Late_Iron_Age_Persian_Achaemenid_Period

I added this article because I thought it would be interesting to see how GIS is used outside of articles with GIS as their primary focus. It uses GIS minimally, but still in an interesting and useful way. The article is a survey and reassessment of the available archaeological data on Achaemenid Armenia. The authors use GIS to make a map that pinpoints each important location discussed in the article. This is a very useful application of the approach, since the article serves as an introduction to the available evidence in this field. Readers, such as myself, who don’t know a lot about this Achaemenid satrapy are therefore not left in the dark about where everything the author discusses is located. This is especially helpful for people studying the Achaemenid Empire: even with a focus on one area, you need to move between different regions to understand how the wider imperial administration worked, so making the article accessible to non-specialists on ancient Armenia is very valuable!

Noam Mendzelevski:

Archaeology Data Service, n.d. ‘Guides to Good Practice.’

https://archaeologydataservice.ac.uk/help-guidance/guides-to-good-practice/

The Guides to Good Practice, created jointly by the UK Archaeology Data Service (ADS) and Digital Antiquity in the US, are a series of digital archaeology guides seeking to provide a basis for archaeological workflows that will produce shareable, archivable digital data. They include a comprehensive and detailed array of information packets on topics such as basic components of digital archaeology, types of data collection and fieldwork, and methods of data analysis and visualisation; the lattermost of which includes the applications and current issues of GIS use, how to create, use and archive GIS datasets, and more.

David Roots:

Ruggeri, F, Crapper, M, Snyder, JR & Crow, J 2017, ‘A GIS-based assessment of the Byzantine water
supply system of Constantinople’, Water Supply, vol. 17, no. 6, pp. 1534-1543.
https://doi.org/10.2166/ws.2017.062

This article examines the 4th- and 5th-century aqueducts of Constantinople using GIS, drawing on GPS data for the various channels as well as a digital elevation model from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) GDEM V2. The authors were able to broadly validate earlier research by Snyder (2013), who had estimated the fourth-century line at 267,880 m against this study’s 245,913 m. The study found that the channels totalled an astonishing 565 km (counting sections where the lines ran in parallel), with gradients ranging from 5 m/km down to 0.5 m/km.

Maxime Guénette:

Parcero-Oubina, C, Smart, C and Fonte, J. 2023. Remote Sensing and GIS Modelling of Roman Roads in South West Britain. Journal of Computer Applications in Archaeology, 6(1): 62–78. DOI: https://doi.org/10.5334/jcaa.109

This article presents a recent study of a highly likely road network system in the southwest of Roman Britain. This region is far less studied than other parts of England because of the scarcity of evidence. The authors used LiDAR from national surveys to build a GIS database, which they then used for spatial analysis of possible roads between settlements, possible Roman sites and Roman forts. To overcome the limitations of an exclusively least-cost-path analysis, they also used MADO and CMTC.
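The least-cost-path idea the authors start from can be illustrated with a toy example: Dijkstra’s algorithm over a grid of per-cell movement costs (a stand-in for a slope- or terrain-derived friction surface). This sketch is only illustrative of the general technique, not the paper’s actual GIS workflow or its MADO/CMTC extensions:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a grid of per-cell traversal costs (4-neighbour moves).
    Returns the cheapest route and its accumulated cost (start cell included)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from goal to start to recover the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# High values deter the path, as steep terrain would deter a road.
terrain = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]
route, total = least_cost_path(terrain, (0, 0), (0, 3))
print(route, total)
```

On this grid the cheapest route detours around the high-cost cells rather than crossing them, which is exactly the behaviour that makes LCP attractive, and also why relying on it exclusively can over-commit to a single corridor.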

Elizabeth Koch-Kölük:

https://www.forbes.com/sites/drsarahbond/2016/09/22/does-nycs-new-3d-printed-palmyra-arch-celebrate-syria-or-just-engage-in-digital-colonialism/#

This article (2016) discusses the 3D model of the Roman arch in Palmyra (dated to the reign of Septimius Severus, 193–211 CE) that was destroyed by ISIS in October 2015. The article notes that cultural appropriation of the town’s ruins occurred well before the 21st century, for instance in the seal of the USA and the ceiling of the Freer–Sackler galleries.
The 3D model of the arch was originally created by the UK Institute for Digital Archaeology (IDA) and displayed in Trafalgar Square. Later it moved to New York, where the Deputy Mayor proclaimed it an act of solidarity with the people of Syria. While the text acknowledges this aspect, it also points out that the IDA has not sufficiently disclosed any ties with Syrian communities. Furthermore, it speaks of “digital colonialism”, with its in-built danger of institutes retaining the copyright for the models and limiting open access. The article ends by quoting the IDA’s director of technology, who at the time promised open access to the data file of the arch.

Maxime Guénette:

Timofan, Anca, Călin Șuteu, Radu Ota, George Bounegru, Ilie Lascu, Radu Ciobanu, Dan Anghel, Cătălin Pavel, and Daniela Burnete. “PANTHEON 3D. An Initiative in the Three-Dimensional Digitization of Romanian Cultural Heritage.” Studia Universitatis Babeș-Bolyai Digitalia 63, no. 2 (March 15, 2019): 65–83. https://doi.org/10.24193/subbdigitalia.2018.2.05.

This paper describes recent work combining Romanian cultural heritage and technology. The National Museum of Unification in Alba Iulia is leading a large project called Pantheon 3D, which uses 3D scanning and modelling to create virtual versions of ancient Roman artefacts. The goal is to build a digital collection and a website where it can all be explored, and to use these digital resources in museums, schools and research to improve engagement with history. By collaborating with other groups, the team hopes to find new ways to preserve and appreciate shared heritage. The paper argues that 3D technology of this kind is important for studying ancient material, and that Pantheon 3D could be a significant step for preserving Romanian heritage.

 Euan Bowman:

Hepworth, K. and Church, C. (2018). ‘Racism in the Machine: Visualization Ethics in Digital Humanities’. Digital Humanities Quarterly 12.

An interesting paper that picks up on some of what we have discussed in class previously. It examines how human biases and prejudice can become reflected in digital humanities projects and considers the ethical dimensions of this, using digital maps of historic lynchings in the US as case studies. It shows how the visual presentation of maps is inextricably linked to political views: whether state boundaries are shown or not, for example, implies a different view of lynchings as either a state-specific or a national problem. On maps that give an option to toggle state boundaries, the implication is that it is both.

Noam Mendzelevski:

Khunti, R., 2018. ‘The Problem with Printing Palmyra’. URL: https://scholarworks.iu.edu/journals/index.php/sdh/article/view/24590/32535.

The document analyses the ethical issues surrounding the reconstruction of Palmyra’s Arch of Triumph, which was destroyed by ISIS in 2015. The Institute for Digital Archaeology (IDA) used 3D printing to create replicas of the arch in New York, London, and Dubai. The author argues that the reconstruction failed to meet basic ethical standards in key ways, and that basic ethical principles that apply to preservation and display of original heritage sites should also govern digital reconstructions.

David Roots:

https://wikimedia.org.uk/2018/03/data-on-the-history-of-scottish-witch-trials-added-to-wikidata/

A blog post from Wikimedia UK’s website, looking at a project from 2017 which uploaded the University of Edinburgh’s data on witch trials, data which had sat in the cold cauldron of the university’s Microsoft Access database for a decade. Some 45 Design Informatics Masters students uploaded the data to Wikidata and produced various visualisations of it.

Maxime Guénette:

Zhao, Fudie. ‘A Systematic Review of Wikidata in Digital Humanities Projects’. Digital Scholarship in the Humanities 38, no. 2 (2023): 852–74. https://doi.org/10.1093/llc/fqac083.

The aim of the systematic review was to identify and evaluate how Wikidata is perceived and utilized in Digital Humanities (DH) projects, as well as its potential and challenges as demonstrated through use.

The paper found that:

  1. Wikidata is commonly used in DH projects as a content provider, a platform, and a technology stack.
  2. It is often implemented for annotation and enrichment, metadata curation, knowledge modelling, and Named Entity Recognition (NER).
  3. Most projects tend to use data from Wikidata, but there is potential to utilize it as a platform and technology stack for publishing data or creating a data exchange ecosystem.
  4. Projects face two types of challenges: technical issues in implementation and concerns with Wikidata’s data quality.
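The annotation-and-enrichment pattern in the findings above typically runs through Wikidata’s public SPARQL endpoint. The sketch below builds such a request in Python; the endpoint URL, the `VALUES`/label-service syntax and the Q-identifiers (Q859 Plato, Q868 Aristotle) are standard Wikidata conventions, but the specific query is my own minimal example, and no network request is actually sent here:

```python
from urllib.parse import urlencode

# Wikidata's public SPARQL endpoint.
ENDPOINT = "https://query.wikidata.org/sparql"

def label_query(qids, lang="en"):
    """Build a SPARQL query fetching labels for a batch of Wikidata items."""
    values = " ".join(f"wd:{q}" for q in qids)
    return f"""
    SELECT ?item ?itemLabel WHERE {{
      VALUES ?item {{ {values} }}
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "{lang}". }}
    }}"""

query = label_query(["Q859", "Q868"])

# The GET URL one would request; format=json asks for JSON results.
url = ENDPOINT + "?" + urlencode({"query": query, "format": "json"})
print(url[:60])
```

This is the simplest version of using Wikidata “as a content provider”: a project keeps only stable Q-identifiers in its own data and pulls labels, dates or coordinates on demand.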

Elizabeth Koch-Kölük:

Ford, H., & Iliadis, A. (2023). Wikidata as Semantic Infrastructure: Knowledge Representation, Data Labor, and Truth in a More-Than-Technical Project. Social Media + Society, 9(3).
https://doi.org/10.1177/20563051231195552

This article highlights how online platforms (Google, Alexa, Amazon, etc.) increasingly use Wikidata as a ‘critical architecture’ that links data throughout the world. The authors argue that Wikidata is a critical mediator of truth and thus has significant social and political implications. On a critical note, they observe that because information is disseminated in discrete bits, these facts are not always anchored to their original references. They propose classifying Wikidata as a semantic infrastructure so that questions about its impact can be more easily formulated. One challenge they note is the tension between Wikidata’s development as a public good and its use by downstream platforms that utilise its facts without credit. They end by highlighting two areas for future research: first, the study of knowledge automation and dissemination in an environment with a few dominant commercial players; and second, whether Wikidata offers an alternative to ‘exploitative platform capitalism’ in the production of public knowledge goods.

Noam Mendzelevski:

Sengul-Jones, M., 2021. ‘The promise of Wikidata.’ DataJournalism.com. URL: https://datajournalism.com/read/longreads/the-promise-of-wikidata

The article discusses how data journalists can use Wikidata as a linked open knowledge base in their work, highlighting its upsides and potential downsides. It mentions Wikidata’s uses as, among other things, a shortcut between different language versions of Wikipedia, a means of large-scale data querying with the multilingual API (though with caution around nuance), and a way of discovering relationships by querying the knowledge base. Critically, though, the article also warns of limitations such as inconsistencies, biases, and gaps in coverage that journalists should be aware of when using the data. In all, the article raises awareness of the considerable benefits Wikidata offers modern digital journalists while also cautioning responsible usage.

Euan Bowman:

Isaksen, L., Simon, R., Barker, E., and P. de Soto Canamares. ‘Pelagios and the Emerging Graph of Ancient World Data’. In: WebSci ’14: Proceedings of the 2014 ACM conference on Web science, ACM, pp. 197–201.

https://oro.open.ac.uk/43658/1/2014_Isaksen_Barker_etal_Pelagios_WebSci.pdf

The paper discusses the growth of the Linked Open Data project GAWD (Graph of Ancient World Data). It explains, for example, that GAWD uses simple ontologies such as Open Annotation, which makes it easier to adopt for people working in the humanities who may not have a background in digital approaches. It also argues that there is still a lot of work to be done and offers several steps to be taken to integrate it further.

Elizabeth Koch-Kölük:

Claire S., “3D Printing & Intellectual Property: Are the Laws Fit for Purpose?”, 3DNatives, 3 July 2023.
https://www.3dnatives.com/en/3d-printing-and-intellectual-property-are-the-laws-fit-for-purpose-150320235/

This news article discusses the laws pertaining to 3D printing and intellectual property. It outlines the legal definitions pertinent to the field (copyright, patents, trade secrets, trademarks) and the parties involved, from 3D-printing hobbyists to multinationals who seek to guard their discoveries with patents and trademarks. Different stakeholders are affected differently by the laws and may face different repercussions. The article also highlights the NDAs demanded of employees in 3D-printing companies. There are movements fighting for change to IP law, such as Creative Commons, which is trying to reform copyright so that the public has easier access to culture and knowledge. The article ends by noting that legal rulings tend to favour larger companies and may disfavour open creativity, and lists some individuals and organisations campaigning for more open access to creative works. Overall, the IP laws applicable to 3D printing are complex and look set to become more nuanced and important in the immediate future.

The website has been in existence since 2013 and was acquired by the American Society of Plastics Engineers (SPE) in 2023. Its mission is to bring the “wonders of 3D printing to as many people as possible”.

David Roots:

https://www.gla.ac.uk/news/headline_1012014_en.html

A short news article demonstrating the application of XR in museums and cultural heritage. The University of Glasgow, through its ‘Museums in the Metaverse’ project, aims to create a virtual platform where any user with a VR headset can interact online with a museum’s entire collection. This not only gives the public a new experience but also opens the door for academic study, as reportedly about 90% of a museum’s collection is held in storage.

 Euan Bowman:

Kantaros, A., Soulis, E. and E. Alysandratou, (2023) ‘Digitization of Ancient Artefacts and Fabrication of Sustainable 3D-Printed Replicas for Intended Use by Visitors with Disabilities: The Case of Piraeus Archaeological Museum’. Sustainability 15: 1-18.

https://www.mdpi.com/2071-1050/15/17/12689

This article touches on one of the topics that came up in the Thursday seminar; specifically, it gives one answer to the question of what 3D modelling and printing can be useful for. It uses two statues recovered from Piraeus as case studies for how these technologies can enhance learning for people who are blind or visually impaired. The article argues that 3D-printed copies will allow people to interact with these artefacts through physical touch and therefore gain a clearer sense of the objects. The authors also argue that 3D printing and modelling have a lot of potential ‘to be purposefully crafted and modified in order to cater to a diverse range of disabilities, including but not limited to visual impairments, hearing impairments, and mobility limitation’.

Maxime Guénette:

Loder, William. ‘Designing Digital Antiquity: Classical Archaeology in New Virtual Applications’. M.A. dissertation, University of Arkansas, 2021. https://scholarworks.uark.edu/etd/4251.

In this thesis, William Loder explores the integration of archaeological theory with game design principles to create immersive 3D applications of ancient sites in virtual reality. By combining concepts from phenomenology and sensory studies with game design theory, Loder aims to enhance interactive education and research opportunities in classical archaeology. The focus is on developing a Virtual Roman Retail Project (VRR) application that allows users to explore a reconstructed shop scene in Pompeii, engaging with historical contexts and material data through interactive experiences. The methodology involves a blend of photogrammetry and 3D modeling techniques to accurately recreate the ancient environment. Loder emphasizes the importance of embodiment and presence in virtual landscapes for a more immersive learning experience. The thesis also discusses the challenges of digital reconstruction in archaeology, highlighting the need for constant interpretation and iteration to ensure accuracy and relevance in research.

Maxime Guénette:

Muenster, Sander. ‘Digital 3D Technologies for Humanities Research and Education: An Overview’. Applied Sciences 12, no. 5 (25 February 2022): 2426. https://doi.org/10.3390/app12052426.

The article explores the use of digital 3D technologies in humanities research and education. It covers key concepts, such as methodological settings, scientific communities, workflows, technologies for creating and visualizing 3D models, documentation standards, and framework conditions. The study investigates the impact of 3D technologies on cultural heritage studies and discusses funding sources, ethical considerations, and acknowledgements. Additionally, it highlights the importance of collaborations and networking in advancing research in this field.

Noam Mendzelevski:

Barratt, R. P. (2021). Speculating the Past: 3D Reconstruction in Archaeology. In E. M. Champion (Ed.), Virtual Heritage: A Guide (pp. 13–24). Ubiquity Press. http://www.jstor.org/stable/j.ctv2dt5m8g.6

The chapter comprehensively discusses the main uses, methods, and issues of 3D reconstructions in archaeology. It offers a balanced perspective on the advantages (e.g. new opportunities for archaeological interpretation) and limitations (e.g. indistinguishable hyperrealism) of 3D reconstructions, while also presenting practical solutions to address the issue of inaccuracy in 3D models. The author’s discussion of games and their ability to create immersive learning experiences is especially relevant for those seeking to engage the public with archaeological heritage.

Elizabeth Koch-Kölük:

Götz, S. (2024), ‘Virtual 3D Reconstruction of Ancient Architecture in the Ostia Forum Project’, Ostia Forum Project website.
https://ostiaforumproject.com/virtual-3d-reconstruction-of-ancient-architecture-in-the-ostia-forum-project/

This article describes how the Ostia Forum Project, which is run by the Stiftung Humboldt-Universität, has been working with the structure-from-motion (SfM) method. In 2020 the project began creating virtual 3D reconstructions of various construction phases, fragments, altars, temples and the plaza of ancient Ostia. The aim was to enable examination of lines of sight, walkways and the incidence of light. The project has merged polygon-based 3D reconstructions with models from 3D photogrammetry. This article presents the current reconstruction of the Temple of Roma and Augustus in the Forum. The result is a walk-in 3D model that can be explored and compared with other buildings. Additionally, light effects during the day and throughout the year have been included.

David Roots:

Lassandro, P., Lepore, M., Paribeni, A. and Zonno, M. (2019). ‘3D Modelling and Medieval Lighting Reconstruction for Rupestrian Churches’. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9. 8th Intl. Workshop 3D-ARCH “3D Virtual Reconstruction and Visualization of Complex Architectures”, 6–8 February 2019, Bergamo, Italy.

https://isprs-archives.copernicus.org/articles/XLII-2-W9/417/2019/isprs-archives-XLII-2-W9-417-2019.pdf

This article examines the use of photogrammetry and laser scanning to create 3D models of Byzantine churches in Italy, in order to examine how the buildings were illuminated in medieval times.

 Euan Bowman:

G. Kontogianni, A. Georgopoulos, N. Saraga, E. Alexandraki, K. Tsogka, ‘3D VIRTUAL RECONSTRUCTION OF THE MIDDLE STOA IN THE ATHENS ANCIENT AGORA’. The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences.

https://www.academia.edu/3303466/3D_VIRTUAL_RECONSTRUCTION_OF_THE_MIDDLE_STOA_IN_THE_ATHENS_ANCIENT_AGORA

The article describes the process of reconstructing the Middle Stoa from the Athenian Agora as a 3D model. Data for the reconstruction were ranked according to accuracy before being implemented: previous 3D models and other modern reconstructions were considered highly accurate, for example, while travellers’ reports were seen as less so. At times, the authors may have leaned on the supposed accuracy of modern models a little too much. When describing how they applied colours to their version of the building, they state that they used only one modern reconstruction as evidence, but they do not elaborate on what evidence that previous study used or what its authors’ assumptions were. It therefore feels as though we simply have to take their word for it.

 Euan Bowman:

Horne, R. (2020). Mapping Power: Using HGIS and Linked Open Data to Study Ancient Greek Garrison Communities. In: Travis, C., Ludlow, F., Gyuris, F. (eds) Historical Geography, GIScience and Textual Analysis. Historical Geography and Geosciences. Springer, Cham.

https://doi.org/10.1007/978-3-030-37569-0_13

The chapter discusses how developments in LOD have enabled new advances in the study of ancient Greek garrisons. The author builds a gazetteer covering Greek garrisons throughout antiquity. The chapter argues that this would have been impossible without LOD, since many humanities datasets are derived from textual sources that offer little geospatial data, which complicates the use of GIS. After aligning the data with information in Pleiades, however, a complete overview of Greek garrisons became possible. Using this gazetteer, it was found that garrison commanders were usually stationed in areas peripheral to ruling powers and never in imperial capitals.

Elizabeth Koch-Kölük:

Rantala, H., Ikkala, E., Koho, M., Tuominen, J., Rohiola, V., & Hyvonen, E. (2021). Using FindSampo Linked Open Data Service and Portal for Spatio-temporal Data Analysis of Archaeological Finds in Digital Humanities. CEUR Workshop Proceedings, 2980. http://ceur-ws.org/Vol-2980/paper330.pdf

This paper discusses FindSampo (a LOD service and semantic portal) that is based on Finnish citizen-science archaeological data and has been in use since May 2021. It is a collaboration between the public, archaeologists and heritage managers in Finland. Currently there are more than 3,000 archaeological finds made by the public in the data service, which can be searched, with results visualised in maps, charts and a timeline. Most finds derive from metal detecting, and the visualisations therefore also provide information on further places recommended for detecting and those that should be avoided. Querying the data is carried out with SPARQL. In future, the framework will be adapted for international archaeological finds as well.

Maxime Guénette:

Blaney, Jonathan. ‘Introduction to the Principles of Linked Open Data’. Programming Historian 6 (2017). https://doi.org/10.46430/phen0068.

This is a lesson from Programming Historian and a good introduction to the main aspects of Linked Open Data, such as URIs, RDF and SPARQL. It explains the several advantages of this Semantic Web technology, but also some of its pitfalls and difficulties. The lesson then works through examples to demonstrate how academics can publish Linked Open Data or use the technology in their own research.
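The lesson’s core notions (URIs, subject-predicate-object triples, SPARQL-style pattern matching) can be illustrated with a tiny self-contained sketch. The URIs below are hypothetical examples made up for illustration, apart from the standard RDF `type` predicate:

```python
# Linked Open Data statements are subject-predicate-object triples whose
# terms are URIs, so independently published datasets can be merged and
# queried together.
triples = {
    ("http://example.org/person/petronius",
     "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "http://example.org/class/Author"),
    ("http://example.org/person/petronius",
     "http://example.org/prop/wrote",
     "http://example.org/work/satyricon"),
    ("http://example.org/work/satyricon",
     "http://example.org/prop/genre",
     "http://example.org/class/Novel"),
}

def match(s=None, p=None, o=None):
    """SPARQL-style triple pattern: None plays the role of a variable."""
    return [(a, b, c) for (a, b, c) in triples
            if s in (None, a) and p in (None, b) and o in (None, c)]

# 'What did Petronius write?'
works = match(s="http://example.org/person/petronius",
              p="http://example.org/prop/wrote")
print(works)
```

Real RDF stores (and SPARQL engines) are far more capable, but the mechanics are the same: fix some positions of the pattern, leave others as variables, and collect the matching triples.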

Posted in Open Access, SunoikisisDC, Teaching

Digital Humanities student placement in Institute of Classical Studies 2024

Guest post by Wei Hei Nip

As part of the MSc Digital Humanities programme at UCL, I had the opportunity to contribute to the ongoing project on 3D digitisation and documentation of the Ehrenberg Bequest at the Institute of Classical Studies between 29th April and 17th May 2024. As a former student of Ancient History and Archaeology, and currently pursuing my Masters in Digital Humanities, this placement provided me with a unique opportunity to apply my skills and knowledge from DH in archaeological settings.

The Ehrenberg Bequest is a collection of over 150 small antiquities, mostly Greek and Roman ceramics, bequeathed to the ICS by Dr. Victor Ehrenberg in 1976. The collection is currently used in teaching and training in 3D methods, involving students, workshop participants, and interns on placements. The 3D digitisation work involves producing 3D models of the collection items and documenting the imaging and modelling process.

The Senate House MakerSpace is a shared space for digitisation and experimentation located in Senate House, focusing on the experimentation with and application of 3D technologies. The Digital Humanities Research Hub sometimes organises training and workshops at the MakerSpace to help students and researchers think about the use of 3D technologies in their work. My work placement mainly took place within the MakerSpace.

Setup of photogrammetry studio at the MakerSpace

Tasks

1. Digitisation

One of my main tasks during the placement was to digitise artefacts from the Ehrenberg Collection. This involved using photogrammetry techniques and equipment to capture 3D images of the artefacts: taking photos of each artefact from different angles and processing them with the photogrammetry software Agisoft Metashape Pro. More on the 3D modelling workflow can be found in the blog posts by Gabriel Bodard (see here) and Barbara Roberts (see here).

The training was useful and informative, with an extensive list of videos and hands-on practice provided. Following the training, I selected a small number of objects to work on. I carefully documented the photogrammetry process and processed the captured images to create accurate 3D models. The finished 3D models are uploaded to the ICS’s Ehrenberg Collection SketchFab account, where they are publicly accessible.

The Ehrenberg Bequest offers a variety of artefacts to digitise, such as Roman lamps and fragments of pottery. Here’s an example of my digitisation.

Cypriot white-painted ware aryballos Ehrenberg collection catalogue no. 43 (see here)

2. Documentation

Besides digitising the artefacts, I also helped with the documentation and cataloguing processes. I recorded all the necessary information about the digitised objects mainly in a designated Word document. This involved noting down details such as object descriptions, 3D modelling configurations and processes.

In addition to the Word document, I attempted to document the 3D digitisation using the Digital Lab Notebook (DLN) created by Cultural Heritage Imaging (CHI). It is open-source software for collecting and managing metadata about computational photography projects. Users can enter information such as metadata on the imaging subjects, equipment, imaging methodology, and related locations, people and documents.

However, challenges were encountered during data input in the DLN. For instance, its configurations are sometimes not applicable to the project. Further work and planning are needed to utilise the software fully for the project’s documentation.
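As a rough illustration of the kind of structured, machine-readable record this documentation aims at (all field names and values below are purely illustrative, not the DLN’s actual schema or the project’s real records):

```python
import json

# A hypothetical per-object documentation record: object identification,
# capture settings, processing notes, and where the output was published.
record = {
    "catalogue_no": "Ehrenberg 43",
    "object": "Cypriot white-painted ware aryballos",
    "capture": {
        "method": "photogrammetry",
        "software": "Agisoft Metashape Pro",
        "photo_count": 120,  # illustrative value
    },
    "processing_notes": "masked background; medium-quality depth maps",
    "output": {"platform": "SketchFab", "access": "public"},
}

# Serialising to JSON keeps the documentation easy to archive and to
# cross-reference against the catalogue spreadsheet.
serialised = json.dumps(record, indent=2)
print(serialised[:80])
```

Keeping records in a structured format like this, rather than free-form Word notes, is what makes later cross-referencing and inventory checks scriptable.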

I also organised the catalogue spreadsheet, cross-referencing it with the digitised objects to ensure that there were no missing records. This helped maintain a comprehensive and accurate inventory of the collection. However, owing to the storage conditions of the Ehrenberg collection, the catalogue numbers of some artefacts have been lost, so those artefacts cannot be identified from the catalogue spreadsheet. Further work may have to be done to better organise and catalogue the collection.

My Learning

Through this placement, I gained hands-on experience in 3D digitisation, image processing, and model creation. This experience reinforced my understanding of how digital technologies can be utilised for cultural heritage preservation and accessibility. I am also grateful that I learnt how to make 3D models using photogrammetry despite having no prior experience at all.

The placement also let me work in a self-directed manner, allowing me to organise my time and work independently. It also gave me the chance to engage with people at the Digital Humanities Research Hub and the ICS, and to understand how digital humanities are applied in their work.

I also got to participate in activities at the MakerSpace. The Analog 3D Printing workshop was particularly fun: we created objects with clay as a hands-on discussion of how ancient technologies and objects were made.

Products from the analog 3D printing workshop

By the end of the placement, I also got to 3D print a digitised object from the ICS Ehrenberg Collection and take it home as a souvenir. It was printed on the Ultimaker 3 Extended 3D printer in the MakerSpace.

Posted in Teaching

Digital Classicist London 2024 programme

The programme for the Digital Classicist London seminar 2024 is now up. The seminar takes place on Fridays in late June and throughout July. Two seminars are online only; three take place in the Senate House MakerSpace and are also live-streamed. YouTube links for all are included below. All seminars are at 17:00 BST except for July 5, which starts half an hour later. Advance booking is required for in-person seminars.

  • Fri, June 28, 2024. Jaclyn Neel (Carleton University), Rome from the Ground Up: Romulus and Remus on YouTube. (Online only) (Youtube)
  • Fri, July 5, 2024: 17:30. Chiara Palladino (Furman University), Representing ancient landscapes digitally: the what, the how, the why. (Senate House MakerSpace [book]) (Youtube)
  • Fri, July 12, 2024. Anna Conser (University of Cincinnati), Pitch Accents as Melodic Data in Ancient Greek Texts: Digital Tools for Close and Distant Reading. (Online only) (Youtube)
  • Fri, July 19, 2024. Naomi Scott (University of Bristol), Crowdsourcing a translation of Julius Pollux’s Onomasticon: methods and problems. (Senate House MakerSpace [book]) (Youtube)
  • Fri, July 26, 2024. Valentina Iannace (University of Florence), Matching fragments from the Tebtynis Temple Library “deposit”: preliminary work to develop an AI software, with some examples of Greek documents. (Senate House MakerSpace [book]) (Youtube)

Organised by Katharine Shields (KCL) and Gabriel Bodard (ICS).

Posted in seminar

Data Driven Classics (London, July 5, 2024)

This free one-day workshop looks as though it would be particularly valuable for classicists (within reach of London) with no previous experience of working with digital datasets or corpora who want a good hands-on and theoretical introduction to the possibilities.

(Forwarded for Andrea Farina.)

CALL FOR PARTICIPANTS – DATA DRIVEN CLASSICS: EXPLORING THE POWER OF SHARED DATASETS

Dear colleagues,

The Department of Digital Humanities at King’s College London is excited to announce a unique opportunity for scholars interested in the intersection of Classics and digital methodologies. We invite you to participate in our upcoming event entitled Data Driven Classics: Exploring the Power of Shared Datasets on 5th July 2024.

Date: 5th July 2024
Time: 10:00 AM – 5:00 PM
Venue: King’s College London, Embankment Room MB-1.1.4 (Macadam Building, Strand Campus)

About the Workshop:
The study of the ancient world increasingly relies on curated datasets, emphasising the importance of data sharing and reproducibility for open research in today’s technologically interconnected world. In this context, the workshop aims to achieve two main objectives:

  1. Raise awareness on the significance of datasets, data papers, and data-sharing for Classics.
  2. Guide classicists in identifying, utilising, and sharing datasets within the scientific community.

The workshop will consist of a one-day programme featuring engaging presentations, hands-on sessions, and roundtable discussions led by experts in the field. In the morning session, our four invited speakers will explore the importance of data-sharing and present case studies of published datasets in Classics, covering linguistic and historical-geographical perspectives. This will be followed by a general discussion on data use and sharing.

Invited speakers:

  • Dr Mandy Wigdorowitz (University of Cambridge), Humanities has a place in the open research and data sharing ecosystem
  • Paola Marongiu (University of Neuchâtel), Collecting, creating, sharing and reusing data in Classics: an overview of the best practices
  • Mathilde Bru (University College London), Building and publishing a dataset as a Classicist
  • Prof Claire Holleran (University of Exeter), Working with epigraphic datasets: mapping migration in Roman Hispania

In the early afternoon, participants will engage in hands-on activities, working in groups to describe datasets and identify their potential for reuse. They are encouraged to bring their own datasets, if available, to receive feedback from both the workshop facilitators and fellow participants. Feedback will focus not only on the quality of the data itself but also on the best practices for sharing it (e.g., format, open repository, deposition process). For those who do not have their own datasets, we will provide sample datasets to familiarise themselves with various repository types and data formats. Participants will also have the opportunity to learn about different platforms for data sharing and essential elements such as creating a README file and understanding its purpose. Discussions will also cover vital aspects such as licensing options and the significance of obtaining a DOI for datasets.

Who can attend:
This workshop is open to postgraduate students, researchers, and staff members interested in Classics, regardless of their level of expertise in digital methodologies. We especially encourage participation from those with an interest in linguistics, archaeology, history, and related fields. Participants are sought within and outside King’s College London. Preference will be given to applicants whose cover letters demonstrate that their research projects or professional pursuits benefit from the event. We also aim to maintain a balanced representation across disciplinary backgrounds.

Registration and logistics:
Seats for this workshop are limited. To apply for participation, please email Andrea Farina andGeorge Oliver at andrea.farina[at]kcl.ac.uk and george.oliver[at]kcl.ac.uk attaching a cover letter no longer than one page in .pdf format and writing “REGISTRATION Data Driven Classics” as the subject of your email. In your cover letter, please state your name, affiliation, position (student, PhD student, Lecturer etc.), email address, and your field in Classics (e.g., linguistics, history, etc.), and explain why you would like to attend the workshop and how it can benefit your research.

There is no registration fee for this event. However, participants are responsible for covering their travel expenses through their own institutions. The workshop will accommodate a maximum of 25 participants to ensure adequate assistance during the hands-on session.

Important dates:
Deadline to submit expression of interest with cover letter: 22nd May 2024.

Notification of acceptance: 31st May 2024.

Event: 5th July 2024.

Contact Information:
For any inquiries or further information, please contact Andrea Farina at andrea.farina[at]kcl.ac.uk or George Oliver at george.oliver[at]kcl.ac.uk.

For further info, please visit our webpage.

The organisers

Andrea Farina and George Oliver

Posted in Events, Teaching | Leave a comment

CFP Digital Classicist London 2024

The Digital Classicist London seminar invites proposals for the Summer 2024 series. We are looking for seminars on any aspect of the ancient or pre-colonial worlds, including history, archaeology, language, literature, cultural heritage or reception, that address innovative digital approaches to research, teaching, dissemination or engagement. We are particularly interested in proposals for seminars that think about digital capital, models of labour and credit, and community engagement with heritage and antiquity.

Seminars will be held fortnightly through June and July in the Institute of Classical Studies, Senate House, London, and will be simultaneously streamed to remote audiences on Youtube, but we hope most speakers will be physically present in London. We have a small budget to support travel for speakers within the UK.

Please send an abstract of 300 words to <gabriel.bodard@sas.ac.uk> (clearly marked Digital Classicist London​) by the end of Tuesday April 2, 2024.

Posted in Call for papers, seminar | Leave a comment

Tenure-Track job in Digital Classics, U Georgia

Forwarded for Erika Hermanowicz (to whom any enquiries should be addressed). The date for receipt of applications was 8 January, but has been extended until 15 February, 2024. I have highlighted in green the section relating to digital methods.

Tenure-Track Assistant Professor in Classics

The Department of Classics at the University of Georgia invites applications for a full-time tenure-track Assistant Professor in Data Analytics and Pedagogy in Classics with an anticipated start date of August 1, 2024.

Candidates should be prepared to teach classes in data collection, quantitative analysis, visualization, and AI learning based on data sets of archaeological, material, and/or literary evidence, with a focus on methodologies and pedagogy. We welcome applicants whose research spans the full range of the classical to early modern eras, and applicants with expertise in any languages of the Mediterranean world. Familiarity with economic history and its cultural contexts is preferred. Candidates must have a Ph.D. in Classics or a related discipline by time of appointment.

The successful candidate is expected to maintain an active research agenda, teach undergraduate and graduate courses (with a 2-2 teaching load), and contribute to departmental governance.

To apply, please submit dossiers containing a cover letter, cv, contact information for three references, and a writing sample (20 pages maximum). Applications should be submitted at https://www.ugajobsearch.com/postings/343145. Reference providers will be sent an email through the UGAJobs system with instructions on how to submit their letters of recommendation. Review of applications will begin on January 8, 2024 and continue until the position is filled.

The Franklin College of Arts and Sciences, its many units, and the University of Georgia are committed to sustaining a work and learning environment that is inclusive. The University is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, ethnicity, age, genetic information, disability, gender identity, sexual orientation, or protected veteran status. Persons needing accommodations or assistance with the accessibility of materials related to this search are encouraged to contact Central HR (hrweb@uga.edu).

Georgia is well known for its quality of life in regard to both outdoor and urban activities (www.georgia.org). UGA is a land and sea grant institution located in Athens, 65 miles northeast of Atlanta, the state capital (www.visitathensga.com; www.uga.edu).

For questions, contact the committee chair, Erika Hermanowicz, at: erikat@uga.edu

OR

Grace McGibney, Student Services Paraprofessional, at Grace.McGibney@uga.edu

website: http://www.classics.uga.edu

Posted in Jobs | Tagged | Leave a comment

Digital Classicist London 2023

Institute of Classical Studies, University of London
Fridays at 17:00 UK time in room 349*, Senate House, Malet Street, London WC1E 7HU and all seminars broadcast live to Youtube.

June 9: Panagiota Sarischouli (Thessaloniki), NOMINA Database: Names and Orality in Magic in Antiquity (Youtube) room 349

June 16: Kevin Wong (Harvard), Antiquity for Sale: Game Engines, Asset Stores, and the Platformization of the Classical Imagination in Videogame Development (Youtube) room 349

June 23: David Bamman (Berkeley), Latin BERT: A Contextual Language Model for Classical Philology (Youtube) room 349

July 14: Paola Marongiu (Neuchâtel), Barbara McGillivray (KCL), Lexical semantic change detection in Latin: a use-case on medical Latin (Youtube) *room 265

July 28: Luca Brunke (Exeter), Research-based 3D reconstructions of built heritage environments (Youtube) room 349

ALL WELCOME

The programme for the summer 2023 series of the Digital Classicist London seminar is now available. The seminar was organized by Gabriel Bodard (University of London), Megan Bushnell (UofL), Marco Dosi (UofL), Andrea Farina (KCL) and Paula Granados García (British Museum), and brings together presentations and discussions of innovative digital approaches to research, teaching, dissemination or engagement related to the ancient and pre-modern worlds.
Posted in seminar | Leave a comment

Digital Classicist London seminar 2023 CFP

Call for Presentations

With apologies for the tight deadline…

The Digital Classicist London seminar invites proposals for the Summer 2023 series. We are looking for seminars on any aspect of the ancient or pre-colonial worlds, including history, archaeology, language, literature, cultural heritage or reception, that address innovative digital approaches to research, teaching, dissemination or engagement. Seminars that speak to the ancient world beyond Greco-Roman antiquity are especially welcome.

Seminars will be held fortnightly through June and July in the Institute of Classical Studies, Senate House, London, and will be simultaneously streamed to remote audiences on Youtube, but we hope most speakers will be physically present in London. We have a small budget to support travel for speakers within the UK.

Please send an abstract of 300 words to <gabriel.bodard@sas.ac.uk> (clearly marked Digital Classicist London) by the end of Monday May 1.

Digital Classicist London 2023 is organised by Gabriel Bodard, Megan Bushnell, Marco Dosi, Andrea Farina and Paula Granados García.
Posted in Call for papers, seminar | Leave a comment

Mapping the veterans of Upper Moesia using ArcGIS

written by Ana Honcu, Post-doc researcher, University of Iași

This article is about mapping the veterans’ inscriptions from Upper Moesia using ArcGIS software. How can this be done and toward what aims? By applying GIS algorithms to these datasets, we can make various maps and queries that allow us to discuss distribution and spatiality from many different angles. This experiment made it possible to visualize the data better in a geographic context, to identify the specific locations of inscriptions, and thus to draw conclusions about veterans’ settlements.

A geographic information system (GIS) is a system that creates, manages, analyses and maps all types of data. GIS connects data to a map, integrating location data (where things are) with all types of descriptive information. Simply stated, GIS technology provides a foundation for mapping and analysing data in a geographical context. This allows better visualization of data in a format that leads to discoveries about patterns and relationships within the data itself. GIS technology is increasingly employed in the field of archaeology, but it has not yet been extensively applied to epigraphic datasets. Rebecca Benefiel’s study of wall inscriptions from Pompeii offers an excellent demonstration of the potential of this technology in answering historical questions.

Fig. 1.png

We believe that GIS technology can also be applied with great benefit to the study of the veterans’ inscriptions from Upper Moesia. The province of Upper Moesia was selected as an area of interest for quantitative GIS exploration because it had a significant military presence, and therefore a representative number of inscriptions, which can be used as indicators for the three measured variables: Roman veterans, their origin, and place of settlement. Creating a digital database that can be keyed to a geo-referenced map greatly facilitates research involving inscriptions and spatial analysis. Our study has two goals: to demonstrate the usefulness of applying GIS to a database containing an epigraphic corpus, and to identify the main centres of veteran settlement.

Fig. 1.2

The database comprised 164 inscriptions, collected from epigraphic corpora and online epigraphic databases. Figs. 1 and 1.2 display the locations of all veterans (legionaries and auxiliaries) in the province. The distribution of veteran settlements in Upper Moesia presents an expansive, complex spatial arrangement, with 40 veteran settlements. We can draw a first general conclusion: the preferred areas are the northern, southern, and northeastern provincial borders; cities with colonial (Scupi, Ratiaria) or municipal status (Singidunum, Horreum Margi, Naissus, Ulpiana); the provincial capital (Viminacium); legionary and auxiliary camps (Timacum Minus); castella (for example, Bononia); but also settlements with economic potential.

Fig. 2

Fig. 2.1  

The following two maps pinpoint the locations of legionary (Figs. 2, 2.1) and auxiliary veterans (Fig. 3). The use of ArcGIS maps with a dedicated database clearly helps in mapping separate categories of inscriptions. Not only does the high concentration of inscriptions have a strong visual impact; so does their spread, since no part of the territory is empty. This could not have been achieved without digital aid, and the maps illustrate how the topographical distribution of inscriptions can be analysed in greater detail to provide a multi-layered picture of the veterans. The ArcGIS maps show a visible clustering: the most numerous veterans come from the two legions stationed in the province (legio VII Claudia and legio IIII Flavia Felix), with legionary veterans dispersed across the north and south of the province. Ex-soldiers settled in the north were located near the legionary camps, while veterans in the south of the province were settled through deductio agraria. Most auxiliary veterans settled in the place where they had performed their military service (for example, Timacum Minus).

Fig. 3

Fig. 4

Fig. 4.1

We have also applied some spatial analysis in order to extract new information and meaning from the original data. The result of the data-summarizing tool is a new layer of polygons (a grid) with cells 40 km across. The analysis calculated the number of points (locations of veterans) falling within each cell and the total number of inscriptions per cell (Figs. 4, 4.1). The find-locations tool identified existing features that meet a series of criteria we specify; in our case, the analysis pinpointed legionaries established within a distance of 200 km from the military camps (Fig. 5).
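The two operations described above (counting points per grid cell, and selecting points within a given distance of reference features) can be sketched outside ArcGIS as well. The following is a minimal illustration in plain Python, not the ArcGIS workflow itself: it assumes projected coordinates in kilometres, and the sample coordinates are invented for demonstration only.

```python
import math
from collections import Counter

def grid_counts(points, cell_size=40.0):
    """Count find-spots per square grid cell.

    points: iterable of (x, y) coordinates in a projected system (km).
    Returns a Counter keyed by (column, row) cell indices.
    """
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

def within_distance(points, camps, max_km=200.0):
    """Select points lying within max_km of any camp (planar distance)."""
    return [p for p in points
            if any(math.hypot(p[0] - cx, p[1] - cy) <= max_km
                   for cx, cy in camps)]

# Invented sample coordinates (km), for illustration only
veterans = [(10, 12), (55, 80), (300, 310), (42, 41)]
camps = [(0, 0), (320, 320)]

print(grid_counts(veterans))
print(within_distance(veterans, camps, max_km=200.0))
```

In a real workflow the grid summary and the distance query would be run on geo-referenced layers; the sketch only shows the underlying logic of the two queries.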

Fig. 5

In conclusion, viewing results on an interactive map makes it possible to adjust and refine the analysis until we find the answers we need. Interactive maps create immersive experiences that transform maps from a static view into an opportunity for users to explore. The results demonstrate the potential of the GIS approach in testing hypotheses produced by traditional epigraphic studies.

 

Posted in Uncategorized | Leave a comment