Initially, I was going to transcribe the interview I had conducted with Red Rider, a pilot and instructor at Mesa del Rey Flight School, in OHMS; however, access to the recording has been delayed because of technical difficulties. So, to practice using OHMS, I chose a YouTube interview with another WWII pilot from a collection of interviews in the AMC Museum archives. I admit that I ended up watching the tutorial on how to index the recording. While the metadata was not too difficult to complete, something went wrong when I uploaded the YouTube video because it just would not play. I ended up comparing my Media URL to others' to figure out the changes I had to make. Once the interview was finally rolling, I was able to begin tagging it and working on the transcription. I found that by stopping at points in the interview and writing the partial transcript and synopsis, the interview yielded its unique structure. Oral histories can be challenging to make sense of because the chronology of the interview, despite starting with birth date, childhood, and so on, can easily shift as stories pop up out of sync. The transcription helps bring focus to these stories and offers the possibility of connecting them to historical facts, places, and times. Users may then jump to a specific story when searching a topic. This in turn made me rethink and change the titles of the segments I had decided to stop at. When I use OHMS to transcribe the interview for my Pilot’s Log site, I will need to be specific about these stopping points, how they connect to other information on the site, and what others could gain from them in their searches.
The opportunities created by free, open-source, web-based technologies in the humanities have been significant for re-examining, interpreting, and relating information from literary, cultural, social, political, and historical data. As we worked with the Library of Congress’s WPA Slave Narratives dataset, it was a journey to discover the variety of ways to visualize the intricate elements and connections within it using three visualization tools.
Voyant is a text-mining tool that manipulates text through a range of analyses, such as cirrus word clouds, graphs, contexts, and verbal patterns, which reveal determining elements of the given data. To approach text through word counts or patterns is to break it down to a level that we could not easily accomplish on our own. I feel that this tool brings out another side of the text, revealing words and meaning through numbers.
CartoDB sets humanists off on a different journey. It pulls the data from the set and spreads it out on a map. It layers information and overlaps the data to show the geographic relevance in the text. The geography of the interviews in the slave narratives provided a new narrative that grew out of the reality of the locations.
Palladio seemed the easiest to work with as far as the interface is concerned. It provides a map of such informational detail that it would probably be the most difficult to gather by hand. It is an excellent tool for analyzing relationships between people, locations, events, and ideas. These connections illuminate relationships that can create a new world of understanding about something we thought we knew. For instance, Palladio paints a busy visual of the relationships in the Jazz World, manages to make sense of the communication around The Letters of the Republic, and gives layers of insight into the slave narratives.
In comparing these three digital tools in light of the slave narratives, I would prefer to use Palladio. It seems that the network analysis of those narratives provides deeper insight. However, there is no reason not to use all three and compare the results and the questions each tool was able to raise or answer. Although the nature of the project, the type of text or dataset, and the desired results may narrow the choice down to text mining or mapping, there is no reason not to use the tools interchangeably. Overall, these tools are able to do something we humans would not or could not easily produce, so the possibilities are endless.
Palladio is a free web-based tool that maps relationships and networks in a given dataset. The feature we experimented with is the mapping of relationships between data points and people, using the WPA Slave Narrative Project from the Library of Congress. After having seen this narrative data mapped on CartoDB, we were able to discover relationships between the interviewers and the interviewees and see connections among the various data points, such as topics discussed by male versus female slaves or topics generated based on the type of slave interviewed.
The first step in visualizing connections within a dataset is to upload it to Palladio. Layers of data can be uploaded and compared to reveal relationships; by manipulating various elements of the data, you can map specific connections and yield new insight into it.
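Palladio itself is point-and-click, but it ingests plain tabular data (pasted text or a CSV/TSV file), so preparing an upload can be as simple as writing a spreadsheet-style table, one row per interview. The sketch below uses entirely made-up column names and rows; any of these columns could then serve as the source or target dimension in Palladio’s graph view:

```python
import csv
import io

# Hypothetical rows in the one-record-per-interview shape Palladio accepts.
# Columns like "gender" and "topic" can be chosen as graph dimensions.
rows = [
    {"interviewee": "Person A", "gender": "female", "topic": "family", "state": "Alabama"},
    {"interviewee": "Person B", "gender": "male", "topic": "work", "state": "Alabama"},
    {"interviewee": "Person C", "gender": "female", "topic": "work", "state": "Georgia"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["interviewee", "gender", "topic", "state"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())  # paste this CSV text straight into Palladio's "Load" box
```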
The map above shows the distribution of interviews by location, while the image below reveals connections between gender and the topics discussed.
Overall, Palladio was a great tool for venturing beyond the geographic visualization of these interviews. Its usefulness became apparent when we began to experiment with different data points in the slave narratives to visualize the relationships between topics and the age, gender, or type of the person interviewed.
This is my first time using CartoDB, so I am still experimenting with it and exploring the multitude of ways to visualize locations and their interrelations with the data uploaded into this system of tools. Its motto, “Predict Through Location,” is well chosen because the tool allows users to make inferences from their dataset that they would not have reached otherwise.
CartoDB (https://carto.com/) is free, open-source software that allows users to map data efficiently. It creates maps using location data and tracks people or events over time. Users with a clear, organized dataset can easily upload various information and map out its components. The dataset for my project was drawn from a collection of interviews with former slaves in Alabama. The first map I created was a simple view of the location of each interview. However, the information can be altered by clicking on the data view and choosing a certain element of the dataset, so the user can get information about that point.
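To put points on a map, CartoDB needs location information in the uploaded table; a CSV with latitude and longitude columns (or place names it can geocode) is enough. Here is a minimal sketch of such a file, with invented interviewees and approximate coordinates for two Alabama cities:

```python
import csv
import io

# Hypothetical interview records with approximate coordinates.
# CartoDB can plot rows like these directly from the latitude/longitude columns.
interviews = [
    ("Person A", "Mobile, Alabama", 30.6954, -88.0399),
    ("Person B", "Birmingham, Alabama", 33.5186, -86.8104),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["interviewee", "place", "latitude", "longitude"])
writer.writerows(interviews)
print(buf.getvalue())  # save as a .csv file and upload it as a dataset
```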
There are many ways to utilize CartoDB and present the information through a variety of maps. By clicking on the Paintbrush, the map layer wizard gives several options to reformulate the map with a different focus. Users can choose from fixed or animated maps, such as Cluster, Torque, Category, or Intensity. Each map type allows for further customization.
[Images: Cluster, Intensity, Heat, and Category map examples]
At the bottom left of the screen, users can change the look of the map by changing the basemap.
Two of the basemap options showing the Slave Narrative data:
The Options tool to the right of “Change basemap” provides elements that can be placed on the face of the map to describe it.
Users may add additional layers of dataset by clicking on the + sign in the toolbar.
[Image of a layered map]
The desired map may be exported by clicking Export Image and saved as a .jpg or .png file.
CartoDB is an easy-to-use mapping tool, and I am excited to explore it further.
Voyant (http://voyant-tools.org/) is a web-based text-analysis tool that allows users to examine a single textual document or a collection of them. By entering the URLs of the texts or pasting in the full text, users can upload the material to be analyzed.
There are several tools available in this software. The word cloud is part of Cirrus, which reveals the frequency of words in a given document by displaying the most frequently used words larger than other recurring words. The number of words appearing in the cloud can be changed by adding stop words, which eliminate the most common words in the document that add no value to visualizing the more relevant ones.
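The kind of word-frequency counting that Cirrus automates can be sketched in a few lines of Python. The text and the stop-word list below are toy examples, not part of the actual dataset:

```python
import re
from collections import Counter

# A toy sketch of what Cirrus does: count word frequencies, ignoring a small
# stop-word list so that common function words do not dominate the cloud.
text = ("The master owned the plantation and the mistress ran the house. "
        "The mistress was kind and the master was stern.")
stop_words = {"the", "and", "was", "a"}

words = re.findall(r"[a-z]+", text.lower())
counts = Counter(w for w in words if w not in stop_words)
print(counts.most_common(3))  # the biggest words in the cloud
```

In Voyant, "master" and "mistress" would appear largest in the cloud for this snippet, while "the," "and," and "was" would be suppressed by the stop-word list.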
The Terms function in Cirrus displays these words in order of their frequency; clicking on a word brings up the corresponding graph in Trends. The Links button lets users see which words appear around the most frequently used ones and which connections occur most often, revealing certain textual associations. In the Reader section, the specific document appears with the target words highlighted; clicking on a word graphs its frequency in the Trends function. When the user is looking through a collection of texts, Trends shows the frequency of the selected word in each document as a graph, and the document terms list in this section reveals how many times the chosen word appears in each document. The graph function may also compare several or all major words in a document in relation to each other. The Summary section lets users examine the most distinctive words, while the Contexts section shows where the word in question appears in the text and lets the user read the surrounding passage.
Each section taps into a certain aspect of the text as it collects the “word data”; however, the accumulation of all the sections reveals valuable connections and may raise questions about the text. In our case, looking at the slave narratives, certain interviews contained more of the word mistress while others mentioned the master more, so this knowledge may lead to selecting certain documents for further investigation with a discernible purpose for analysis. Literary texts may reveal concentrations of themes or ideas around a certain word or the name of a character or place. However, the frequency of names and places could also prove insignificant, depending on the focus of the analysis.
Uploading a single text is probably a good way to start with Voyant, letting the user play around with the different functions and examine their results. Although this tool is extremely useful with large amounts of text, it can also become taxing to refresh and reload data, as its performance seems to slow as more text is added.
- A more or less successful practice run at setting up Omeka.