Lab Report

In this paper, I will dissect and analyze seven elements of writing in two lab reports, assessing whether each report sufficiently includes the aspects that form a strong report. The elements I will explore are the title, abstract, introduction, methods, results, discussion, and conclusion. The first lab report, “A study on classification algorithm of REM sleep behavior disorder” by Fumiya Kinoshita and others, describes the development of an algorithm that could diagnose people with REM sleep behavior disorder (RBD), because the current method of diagnosis by visual inspection is time-consuming and costly. RBD is a disorder in which a person acts out their dreams because they sleep without atonia, the muscle paralysis that normally keeps the body relaxed during REM sleep. Instead, the muscles become tense and active, which could cause harm to the sleeper or to others. The second lab report, “Country-level cybersecurity posture assessment: Study and analysis of practices” by Ashutosh Bahuguna and others, examines the cybersecurity methods practiced by 37 countries and analyzes them to assess which seem most effective and how they could all be improved. I will refer to these as lab report 1 and lab report 2, respectively.

Both articles have concise titles that are easy to understand but are overly general compared to the textbook’s format, which may not be helpful for a researcher looking for information on a specific topic. The title of lab report 1, “A study on classification algorithm of REM sleep behavior disorder”, contains keywords that are not difficult to understand and that may interest readers who come across it. The term “classification algorithm” implies that the lab report is intended for audiences well-versed in machine learning, because a classification algorithm is a machine-learning technique for categorizing observations. However, the title remains general, and it would be better if it were more specific about what kind of algorithm was being developed or what problem the algorithm would solve, as described throughout the paper. The same can be said of the second article, titled “Country-level cybersecurity posture assessment: Study and analysis of practices”. It contains helpful keywords that are easy to comprehend, and cybersecurity assessment is a topic readers may want to know more about. However, it could be more specific about what kinds of practices are being studied and analyzed, or about the issue at hand that is discussed throughout the paper. Based on this, I believe the authors of both articles made their titles short for easy readability.

For lab report 1, the writers created an abstract that follows the criteria of the textbook’s format. The authors summarize each of the elements used throughout the paper in one to two sentences, giving a general idea of how the lab report will be structured and what it will discuss. For example, the writers begin the abstract by explaining why they focused on the topic of sleep disorders: sleep disorders have “increased in prevalence” and “the sleep behavior disorder (RBD) is well known” (Kinoshita et al., 2019, p. 9). These are the same ideas that are explained in greater depth in the introduction, and here they serve as a brief overview of the topic. Another example of effective formatting is how the writers end the abstract, explaining that “the author has succeeded in finding an automated algorithm to yield results that are not significantly different from those of visual inspection diagnosis” (Kinoshita et al., 2019, p. 9). In other words, the writers briefly summarize their result: the algorithm they developed to diagnose RBD from sleeping behavior was nearly as accurate as the typical method of diagnosis, visual inspection for common symptoms.

On the other hand, the authors of lab report 2 wrote an abstract that does not summarize all the elements of a report. In the first paragraph of the abstract, the writers emphasize that studying cybersecurity in different countries is important because nations must “generate assurance” in the face of increasing “exposure of critical infrastructure to cyber-attacks” (Bahuguna et al., 2020, p. 250). This is a crucial element that is expanded on in the introduction of the lab report. The writers then explain in the second paragraph that the purpose of the paper is to “offer a new perspective on country-wide cyber-security benchmarking and assurance” (Bahuguna et al., 2020, p. 250). However, they do not provide a thorough description of the elements used throughout the paper; instead, they continue to restate their purpose, such as how they will “understand the global scenario and identification of different methods adopted for a cybersecurity posture assessment” (Bahuguna et al., 2020, p. 250). While the authors preview ideas from their introduction, there is no mention of the other elements. The writers may have taken this approach because they are examining cybersecurity methods already in effect across different countries rather than developing and testing a new method of their own: they analyze previous research and existing data on how these practices perform and what they believe could be improved. I also assume more elements were left out of the abstract because so much data is discussed that it would be difficult to summarize. For instance, in the analysis and discussion section, the writers expand on seven benchmarking activity methods, which are practices used to assess whether a specific cybersecurity assessment model is optimal. As a result, the abstract is not convincing enough to draw readers into the rest of the report.

Both lab reports have strong introductions that mostly follow the textbook’s checklist. To begin, the authors of lab report 1 thoroughly discuss the importance of studying sleep disorders, as “ignoring RBD may result in injuring not only the patient but also his/her family or cohabitants” (Kinoshita et al., 2019, p. 10). The authors include previous studies for context and delve into the specific terms and conditions they reference, describing the distinct brain and body activity throughout different stages of sleep (Kinoshita et al., 2019, p. 9). They even explain why medical practitioners must diagnose RBD by visually observing symptoms, as there were “no clear established criteria for this” (Kinoshita et al., 2019, p. 10). However, the method to be used is not briefly described; I believe the writers omitted this because their method bases the algorithm on many criteria and factors that are difficult to summarize concisely. For the second lab report, the authors likewise stress the importance of studying cybersecurity assessment, as cyber-attacks “reduce available state resources and undermine confidence in their supporting structures,” including their “national security, economy, and public safety” (Bahuguna et al., 2020, p. 250). The writers include older policies and initiatives for context, such as India’s 2013 National Cybersecurity Policy, which had an “assurance framework” using strategies like “cybersecurity drills, sectoral drills… security auditing,” and more (Bahuguna et al., 2020, p. 251). Additionally, the methods of studying and analyzing cybersecurity assessment data from 37 countries are detailed, some of which include gathering data about “Types of cybersecurity posture assessment activities conducted in a country, methods and tools used for conducting assessment activities, frequency of assessment activities,” and four more filter categories (Bahuguna et al., 2020, p. 251). Finally, they conclude their introduction by explaining that their study contributes new knowledge to the field by “present[ing] a new perspective in understanding methods adopted by countries for cybersecurity posture assessment” (Bahuguna et al., 2020, p. 251). Thus, lab report 2 follows the textbook’s guidelines more closely than lab report 1.

Lab report 1 lists the different criteria and manuals used to measure muscle activity and other activity in the test patients. While the previous section already established that the visual judgment criteria of the American Academy of Sleep Medicine (AASM) would serve as the basis for the algorithm’s development, other judgment methods are used to compare the results further, namely those created by “Montplaisir and Simbar” (Kinoshita et al., 2019, p. 10). However, the writers fail to explain why they compare their results with these other judgment types, and the corresponding criteria describe activity in complex terms. For instance, in Montplaisir’s visual judgment method, the description of phasic activity states: “2-s mini-epochs of muscle activity with amplitude of at least fourfold background in mentalis muscle EMG or limb EMG” (Kinoshita et al., 2019, p. 10). I believe the authors wrote the report without much explanation because not everyone can easily replicate this experiment: the audience would need certain licenses and qualifications to monitor patient activity and would have to be well-versed in the terminology and concepts of the sleep disorder field. (A sketch of how such a criterion could be checked automatically follows this paragraph.) In contrast, the second lab report includes the steps taken, the reasons they were taken, and explanations of which sources would be considered. For example, the authors incorporated data from internal sources because they were “from trusted agencies like government reports,” and they even checked validity using the “Method of Data Triangulation” when the same data appeared “from two or more sources” (Bahuguna et al., 2020, p. 252). The section is structured linearly, tracing the process from beginning to end, and is written in enough detail that other researchers could conduct the research themselves. For instance, the methodology section starts by explaining how the research was done, using “research databases, Google Scholar, and Google operator search,” with the results “segregated into 3 categories based on the level (organization, sector & nation)” (Bahuguna et al., 2020, p. 252). There is even a diagram of the steps taken throughout the process to give researchers a better overview of the method (Bahuguna et al., 2020, p. 252). Based on this comparison, lab report 1 does not follow the textbook’s guide as well as lab report 2 does.
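To make the quoted Montplaisir criterion more concrete for readers outside the field, here is a minimal Python sketch of how a rule like “2-s mini-epochs with amplitude of at least fourfold background” could be checked automatically. This is my own illustration, not the authors’ algorithm: the function name, the sampling rate, and the use of mean rectified amplitude as the measure of “amplitude” are all assumptions.

```python
import numpy as np

def detect_phasic_mini_epochs(emg, fs, background_amp,
                              epoch_s=2.0, threshold_factor=4.0):
    """Flag consecutive 2-s mini-epochs whose mean rectified EMG
    amplitude is at least `threshold_factor` times the background
    level (a hypothetical reading of the quoted criterion)."""
    samples = int(epoch_s * fs)           # samples per mini-epoch
    rectified = np.abs(emg)               # rectify the raw EMG trace
    n_epochs = len(rectified) // samples
    return [rectified[i * samples:(i + 1) * samples].mean()
            >= threshold_factor * background_amp
            for i in range(n_epochs)]     # one boolean per mini-epoch

# Example on synthetic data: 30 s of noise at 200 Hz with a burst at 10-14 s.
fs = 200
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 1.0, 30 * fs)
emg[10 * fs:14 * fs] *= 6                 # simulated phasic burst
background = np.abs(emg[:5 * fs]).mean()  # quiet baseline estimate
print(detect_phasic_mini_epochs(emg, fs, background))
```

Even a toy version like this shows why the authors could assume an expert audience: choosing the background level, the epoch boundaries, and the amplitude measure all require clinical judgment that the paper leaves implicit.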

Lab report 1 meets most of the criteria for formatting results effectively compared to the second lab report. The authors of lab report 1 provide diagrams of patients’ recorded EMG levels over time comparing tonic and phasic activity, a bar graph of the visual judgment and automatic algorithm data, and a chart recording the number of people incorrectly diagnosed. These results are addressed and their major trends emphasized, showing that the automatic algorithm tracked nearly the same amounts of tonic and phasic activity as the visual judgers. The writers support this by noting that the visual judgment “ratio was about 36% for tonic activity and about 11% for phasic activity,” while the automated algorithm yielded “42% and 6%,” respectively (Kinoshita et al., 2019, p. 12). The results of the other judgment methods are also included: there was an incorrect judgment rate of “12% for AASM Scoring Manual, 24% for Montplaisir method, and 4% for Simbar method” (Kinoshita et al., 2019, p. 13). The data are clearly tied back to the diagrams they came from, allowing for easy reading of the results. The second lab report, by contrast, only presents the five attributes the authors found to be common in assessing benchmarking activities, including “Types of Activities for Cybersecurity Posture Benchmarking,” “Specific Methods and Tools used,” “Frequency of … Activities,” and two more (Bahuguna et al., 2020, p. 253). There is not much else to report in terms of results because this was the main goal of the paper. Therefore, while both lab reports meet some of the textbook’s results format, the second lab report needs more explanation of how the authors arrived at the results they did.

For the discussion sections, both lab reports meet every requirement of a strong discussion section. The writers of lab report 1 do a good job of analyzing the data they collected and of explaining why some of their results did not meet expectations. They note that the “occurrence ratio of tonic activity was about 7% higher, and occurrence ratio of phasic activity was about 5% lower,” possibly because “the window width in rectified averaging was fixed at 0.2 s” (Kinoshita et al., 2019, p. 13). They also propose improvements to their experiment, including setting a smaller window for recording tonic activity and a bigger window for recording phasic activity to obtain more accurate results (Kinoshita et al., 2019, p. 13).
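Since “rectified averaging” may be unfamiliar, the following is a minimal Python sketch of the general technique: rectify the EMG trace, then smooth it with a fixed-width moving-average window. This is an illustration under my own assumptions, not the authors’ code; only the 0.2-s window width comes from the report.

```python
import numpy as np

def rectified_average(emg, fs, window_s=0.2):
    """Rectify an EMG trace, then smooth it with a moving-average
    window (0.2 s by default, mirroring the value the authors cite).
    A hypothetical illustration; the authors' exact scheme may differ."""
    window = max(1, int(window_s * fs))   # window width in samples
    kernel = np.ones(window) / window     # uniform averaging kernel
    return np.convolve(np.abs(emg), kernel, mode="same")
```

In a scheme like this, the window width sets a trade-off: a narrower window follows brief bursts more closely, while a wider one smooths them out, so a single fixed 0.2-s width could plausibly bias the tonic and phasic estimates in opposite directions, as the authors suggest.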

The authors of the second lab report organized the important categories into a chart listing the different benchmarking activity categories, a description of each, the number of countries that use each activity, and references for them (Bahuguna et al., 2020, p. 253). They expand on each activity; for example, the benchmarking activity Cybersecurity Exercises is practiced the most of all the activities (used by 14 countries) because it assesses the “readiness of the participating entity” while “improving coordination & cooperation among entities” (Bahuguna et al., 2020, p. 253). Throughout the discussion, the authors also acknowledge limitations of their research process. For instance, it was a struggle to find new assessment methods because “cybersecurity activities are… held confidential by governments” and some cybersecurity issues might “not appear using [their] search method” (Bahuguna et al., 2020, p. 256). Both lab reports effectively include all the aspects required for a strong discussion section.

Both lab reports form a cohesive conclusion that summarizes the most important parts of the paper in one to two paragraphs. They restate the purpose of their studies and offer areas for improvement. In lab report 1, different visual inspection methods were integrated into the automated algorithm to assess which would best support accurate diagnosis of REM sleep without atonia (RWA) (Kinoshita et al., 2019, p. 14). However, the authors state that their algorithm can be improved through more testing, refined restrictions, and feedback from the medical technologists who perform visual inspection judgments (Kinoshita et al., 2019, p. 14). In lab report 2, the authors explain that several factors prevented firmer conclusions, such as the limited data reported about the methods used within a country and the use of a mix of methods rather than strictly formal ones (Bahuguna et al., 2020, p. 257). Overall, both lab reports do a good job of creating a concise, well-summarized conclusion.

To sum up, lab reports 1 and 2 structure and explain their points effectively to some extent. Upon analyzing each report further, however, I found lab report 1 to be the sounder paper: it was more informative, more interesting, and better presented than lab report 2. This may be because lab report 2 did not actually conduct an experiment of its own, which makes the information it explains feel more general.

 

Self-Reflection

I didn’t have much difficulty looking for lab reports, as I didn’t have anything too specific in mind. As a computer science major, I was interested in topics involving cybersecurity and the use of AI or algorithms, which I easily found. However, I didn’t realize how complex parts of both lab reports were until I gave them a deeper read and annotated them. I still worked with them because I had already spent so much time on them and had a general understanding of what was discussed, though this did affect my explanation of a few elements. In terms of writing the paper, I didn’t struggle much to analyze and compare the lab reports because the guidelines in Chapter 19 were very clear. If I were to write a report like this in the future, I would find my two lab reports and analyze them as soon as possible, so that I have more time to research the complex ideas discussed or even change my choice of lab reports.

Some of the revisions I made throughout my paper include properly citing quotes in APA format, clarifying which lab report I am discussing, and adding context to my introduction by thoroughly explaining concepts and terms. I also reordered some sentences, which allowed my essay to flow better and served as a good transition into the next paragraph.

A challenge I had throughout this assignment was annotating the studies I read, because of the complex terminology and ideas referenced in both lab reports. While I didn’t necessarily have to understand these fully, since they lie outside the scope of my knowledge and the papers are written for practiced professionals in those fields, I at least gained a general understanding of the main issues, approaches, and results.

I learned a great deal from Chapter 19, and analyzing the conventions of a good lab report through this assignment has solidified my understanding of how a lab report must be written. It was interesting to see how both lab reports I analyzed missed some aspects of certain elements, and working out why the writers chose to do this pushed me to think about each report’s intended audience and purpose. I also found the peer review beneficial to my learning: I was able to see how others formatted the assignment differently from mine, and reading my peers’ writing gave me a better understanding of how to handle certain aspects missing from my own.

 

References

  1. Bahuguna, A., Bisht, R. K., & Pande, J. (2020). Country-level cybersecurity posture assessment: Study and analysis of practices. Information Security Journal: A Global Perspective, 29(5), 250–266. https://doi-org.ccny-proxy1.libr.ccny.cuny.edu/10.1080/19393555.2020.1767239
  2. Kinoshita, F., Takada, H., & Nakayama, M. (2019). A study on classification algorithm of REM sleep behavior disorder. Electronics & Communications in Japan, 102(2), 9–14. https://doi-org.ccny-proxy1.libr.ccny.cuny.edu/10.1002/ecj.12141