Section Tiles

Math Lab (Space): Math Lab rooms are located in the Main Library in rooms 300X and 300Y.
CC's Coffee House (Space): Located on the first floor of the LSU Main Library.
Ask Us (Service): Check our FAQs, submit a question using our form, or launch the chat widget to find help.
Website (207 results)

Gear (44 results)

FAQ (169 results)
Processed vs unprocessed collection--what's the difference?
A processed collection has gone through several steps to become a cataloged record and is thus available to the researching public. Those steps include a thorough vetting of copyright and restrictions; a verbatim transcription or thorough indexing of the interview, including time-stamped calibration; the opportunity for the interviewee to review the transcription; the creation of a finding aid that includes important metadata about the collection; the preservation and optimization of audio files; the creation of user copies; and cataloging. This process requires the efforts of several LSU Libraries staff members: for every hour of recording, it takes an estimated 35-50 hours to fully process. For a detailed breakdown of the stages and fees associated with archiving oral histories, please see The Oral History Budget. All processed collections have a catalog record, and many are available on the Louisiana Digital Library. An unprocessed collection is one that has not reached the final stage of completion and is not yet ready to be cataloged; depending on the stage of processing, more or less of the interview will be available to patrons. An unprocessed collection appears in neither the catalog nor the Louisiana Digital Library. See below for the availability of unprocessed collections.
Answered by: Jennifer Cramer
What are Special Collections?
Special collections are unique materials that provide both primary and secondary sources to people conducting original research. Our collections are special due to their scarcity or rarity, historical value, monetary value, or research value. Archives are collections of original records created throughout the lifespan of a person, family, organization, or business; these materials provide evidence of the activities, events, functions, and/or responsibilities of their creator(s). Archives and special collections differ from libraries in the types of materials collected and in the ways they are acquired, organized, described, and made publicly accessible. These differences prompt us to create specific policies and procedures to ensure that our collections can continue to be used for decades or even centuries to come.
Answered by: Kelly Larson
Database Listing (375 results)

Staff (101 results)

Discovery (2,058,671 results)
Localization of try block and generation of catch block to handle exception using an improved LSTM
Several contemporary programming languages, including Java, provide exception handling as a crucial built-in feature. Try-catch blocks let developers handle unusual or unexpected conditions that might arise at runtime. If exception handling is neglected or applied improperly, it can lead to serious incidents such as equipment failure, and existing approaches to implementing it are difficult and time-consuming. This research introduces an efficient Long Short-Term Memory (LSTM) technique that handles exceptions automatically by identifying the locations of try blocks and generating the corresponding catch blocks. A large corpus of Java code is collected from GitHub and split into fragments. For try block localization, a Bidirectional LSTM (BiLSTM) is used first as a token-level encoder and then as a statement-level encoder; a Support Vector Machine (SVM) then predicts whether a try block is present in the given source code. For catch block generation, a BiLSTM serves as the encoder and an LSTM as the decoder, with an SVM used to predict noisy tokens. The encoder-decoder model is trained to minimize its loss functions. The trained model then uses the black widow optimization method to forecast the following tokens one by one and generate the entire catch block. The proposed approach reaches 85% accuracy for try block localization and 50% accuracy for catch block generation. The improved LSTM with an attention mechanism produces better solutions than existing techniques, making the proposed method a strong choice for automated exception handling.
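To make the task concrete: the model localizes a statement span that should be wrapped in a try block and then generates a matching catch block. The snippet below is an illustrative example of such an input/output pair (it is not taken from the paper; the class and method names are hypothetical).

```java
import java.util.List;

public class Demo {
    // The division below can throw ArithmeticException for an empty list --
    // the kind of statement the model would localize as needing a try block.
    static int average(List<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        try {
            // Integer division by zero throws ArithmeticException;
            // wrapping this statement is the "try localization" target.
            return sum / values.size();
        } catch (ArithmeticException e) {
            // A generated catch block would handle the failure, here by
            // returning a sentinel value instead of crashing.
            return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(average(List.of(2, 4, 6))); // 4
        System.out.println(average(List.of()));        // 0 (exception handled)
    }
}
```

Without the catch block, the second call would terminate the program with an unhandled `ArithmeticException`, which is the failure mode the paper's automated generation aims to prevent.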