ABSTRACT

This chapter presents a compact review of theoretical results obtained in recently studied one-dimensional (1D) and two-dimensional (2D) models, which put forward a generic method for supporting stable spatial solitons in dissipative optical media, based on linear gain applied in narrow active segments implanted into an otherwise lossy waveguide. Spatial dissipative solitons (SDSs) are self-trapped beams of light or plasmonic waves propagating in planar or bulk waveguides. They result from the balance between diffraction and self-focusing nonlinearity, maintained simultaneously with the balance between the material loss and compensating gain. Owing to their basic nature, SDSs are modes of profound significance to nonlinear photonics, in terms of both fundamental studies and potential applications. As concerns the theoretical description, basic models of the SDS dynamics are provided by complex Ginzburg–Landau equations. Models combining localized gain with uniformly distributed Kerr nonlinearity and linear loss have recently been developed in various directions.
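
As a point of reference, the simplest model of this type may be written, in scaled units, as a one-dimensional nonlinear Schrödinger equation with uniform linear loss and an idealized delta-function gain term; the notation below (field u, loss coefficient γ, local gain strength Γ) is a minimal sketch of the generic setting, not necessarily the specific normalization adopted in the chapter:

% 1D Kerr medium with uniform loss (gamma > 0) and a narrow gain segment,
% idealized here as a delta-function "hot spot" of strength Gamma > 0.
\begin{equation}
  i\frac{\partial u}{\partial z}
  + \frac{1}{2}\frac{\partial^{2} u}{\partial x^{2}}
  + |u|^{2} u
  = i\left[\,\Gamma\,\delta(x) - \gamma\,\right] u ,
\end{equation}

where u(x, z) is the local field amplitude, z the propagation distance, and x the transverse coordinate. Away from x = 0, the linear part of this equation yields exponential decay of the field at rate γ, while integrating the equation across x = 0 shows that the δ-function imposes a jump on ∂u/∂x, (1/2)[u_x(0+) − u_x(0−)] = iΓ u(0), through which localized gain can pin a stationary soliton built of decaying tails.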