ABSTRACT

Since the twentieth century, progress in linguistic steganography has been minimal. Academically, several major approaches were introduced before the Nostega-based methodologies: null cipher, mimic functions, NICETEXT, and the noise-based approach, which includes the translation-based, confusing, and SMS-based approaches. The text cover produced by these approaches suffers from numerous flaws in syntax, lexicon, rhetoric, and grammar, and its content is often meaningless and semantically incoherent. Such detectable noise (flaws) can easily raise suspicion during examination by both humans and machines, rendering these contemporary (non-Nostega-based) approaches highly vulnerable. Unlike all other approaches, the Normal Linguistic Steganography (NORMALS) methodology neither generates noise nor uses noisy text to camouflage data [26], because it is based on the Nostega paradigm. NORMALS employs Natural Language Generation (NLG) techniques to produce noiseless (flawless) and legitimate text cover by manipulating the non-random series input parameters of an NLG system so that data are camouflaged in the generated text. As a result, NORMALS is capable of fooling both human and machine examinations. To emphasize, NORMALS is unlike the Matlist methodology because NORMALS is capable of handling non-random series domains, as demonstrated in this chapter. The implementation validation of NORMALS demonstrates that there is room for clever concealment of data at adequate bitrates, and the steganalysis validation confirms the robustness of achieving the steganographic goal, as shown later in the steganalysis chapter of this book.
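To illustrate the general idea of steering an NLG system's input parameters to conceal data, the following is a minimal, hypothetical sketch: each parameter slot offers two equally legitimate values, and choosing between them hides one secret bit per slot. The names (`OPTIONS`, `encode`, `decode`) and the toy generator are illustrative assumptions, not part of the NORMALS specification.

```python
# Hypothetical illustration: hiding bits by choosing among legitimate
# NLG input-parameter values. All names here are invented for this
# sketch and do not come from the NORMALS methodology itself.

# Each "parameter slot" offers two legitimate values; selecting one
# of them conceals one secret bit per slot.
OPTIONS = [
    ("sunny", "clear"),        # weather adjective
    ("morning", "forenoon"),   # time expression
    ("rises", "climbs"),       # verb choice
]

def encode(bits):
    """Pick one legitimate option per slot according to the secret bits."""
    assert len(bits) == len(OPTIONS)
    return [pair[b] for pair, b in zip(OPTIONS, bits)]

def decode(words):
    """Recover the hidden bits from the observed word choices."""
    return [pair.index(w) for pair, w in zip(OPTIONS, words)]

params = encode([1, 0, 1])
# A real NLG system would now produce fluent, noiseless text from
# these parameters; a toy stand-in suffices here.
sentence = f"A {params[0]} {params[1]}: the sun {params[2]}."
```

Because every choice is a legitimate, fluent alternative, the cover text carries no detectable noise; the capacity (here, one bit per slot) grows with the number of independent parameter choices the NLG system exposes.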