Video steganography

There are different approaches to embedding data or messages (secrets) in multimedia files. Different algorithms and techniques shall be evaluated and tested with regard to their robustness against manipulations and the imperceptibility of the secret. Of particular interest are motion-vector-based approaches and the recovery of data or messages after a video has been manipulated, e.g. a scene re-recorded with a smartphone camera.
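
As a simple baseline against which more robust (e.g. motion-vector-based) approaches can be compared, a frame-wise least-significant-bit (LSB) embedding can be sketched with OpenCV. This is only an illustrative sketch under stated assumptions: the function names are made up for this example, and a lossless FFV1 output is assumed so the LSBs survive writing; it offers essentially no robustness against the attacks listed below.

# Illustrative sketch only: embed a secret's bits into the blue-channel LSBs of
# the first frames. Assumes a lossless codec (FFV1) is available in the local
# OpenCV/FFmpeg build; any lossy re-encode would destroy the embedded bits.
import cv2
import numpy as np

def embed_lsb(in_path, out_path, secret):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"FFV1"), fps, (w, h))
    bits = np.unpackbits(np.frombuffer(secret, dtype=np.uint8))
    pos = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if pos < len(bits):
            channel = frame[:, :, 0].copy().reshape(-1)          # blue channel
            n = min(len(bits) - pos, channel.size)
            channel[:n] = (channel[:n] & 0xFE) | bits[pos:pos + n]
            frame[:, :, 0] = channel.reshape(h, w)
            pos += n
        writer.write(frame)
    cap.release()
    writer.release()

def extract_lsb(path, n_bytes):
    cap = cv2.VideoCapture(path)
    bits = []
    while len(bits) < n_bytes * 8:
        ok, frame = cap.read()
        if not ok:
            break
        take = min(n_bytes * 8 - len(bits), frame[:, :, 0].size)
        bits.extend((frame[:, :, 0].reshape(-1)[:take] & 1).tolist())
    cap.release()
    return np.packbits(np.array(bits, dtype=np.uint8)).tobytes()

The decoder simply reads the LSBs back in the same order; under any of the attacks below (including ordinary lossy compression) this baseline is expected to fail, which is exactly what the evaluation should quantify.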

Robustness attacks (a few of these are sketched in code after this list):

  • rotation
  • scaling
  • flipping
  • compression
  • noising
  • cropping
  • frame dropping
  • color / luminance / chrominance manipulations
  • re-recording with external camera
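
A few of these attacks can be applied directly to decoded frames with OpenCV/NumPy for a test harness; the parameters below (5° rotation, 2x down/upscale, horizontal flip, sigma-5 noise, 10% crop, JPEG quality 70) are illustrative assumptions, not prescribed values.

# Illustrative sketch: a few of the listed attacks applied to one decoded frame
# with OpenCV/NumPy; all parameters are placeholder values for a test harness.
import cv2
import numpy as np

def attack_variants(frame):
    h, w = frame.shape[:2]
    rotation = cv2.warpAffine(frame, cv2.getRotationMatrix2D((w / 2, h / 2), 5, 1.0), (w, h))
    scaling = cv2.resize(cv2.resize(frame, (w // 2, h // 2)), (w, h))
    flipping = cv2.flip(frame, 1)  # horizontal flip
    noising = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
    top, left = h // 10, w // 10
    cropping = frame[top:h - top, left:w - left]
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    compression = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # JPEG round-trip per frame
    return {"rotation": rotation, "scaling": scaling, "flipping": flipping,
            "noising": noising, "cropping": cropping, "compression": compression}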


Your tasks:

  • Evaluate video steganography approaches regarding robustness and imperceptibility (a simple PSNR sketch follows this list)
  • Develop the most promising approach to encode and decode a secret in a video file
  • Evaluate your implementation against the different attacks
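
For the imperceptibility part, a simple starting point is to compare cover and stego frames with a full-reference metric such as PSNR (SSIM would be a natural addition); the file names below are placeholders.

# Illustrative sketch: PSNR between a cover frame and the corresponding stego
# frame as a simple imperceptibility indicator (higher = less visible change).
import cv2
import numpy as np

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

cover = cv2.imread("cover_frame.png")    # placeholder file names
stego = cv2.imread("stego_frame.png")
print("PSNR [dB]:", psnr(cover, stego))  # values above ~40 dB are usually hard to notice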


Required skills:

  • Programming: Python, Java, C++
  • Image / video processing: OpenCV, ffmpeg

Related links:


Supervisor: Sebastian Schmidt

Multimedia fingerprinting and similarity

There are different approaches to creating fingerprints (hashes) of multimedia data. Different algorithms and techniques shall be evaluated regarding their robustness against manipulations and attacks, and tested for whether the resulting fingerprints are identical when computed in the terminal, in a web browser, and on a smartphone. Especially interesting is the "perceptual hashing" approach applied to a re-recording of the multimedia data captured with a camera, e.g. a smartphone camera.
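
As one concrete instance of such a perceptual hash, a difference hash (dHash) can be sketched in a few lines with OpenCV; the hash size, file names, and matching threshold below are illustrative assumptions, and more robust schemes (e.g. DCT-based pHash) would be part of the actual evaluation.

# Illustrative sketch: difference hash (dHash) of a frame and a Hamming-distance
# comparison; hash size, file names, and threshold are placeholder assumptions.
import cv2
import numpy as np

def dhash(image, hash_size=8):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size + 1, hash_size))   # (width, height)
    return (small[:, 1:] > small[:, :-1]).flatten()        # 64-bit fingerprint

def hamming(a, b):
    return int(np.count_nonzero(a != b))

original = cv2.imread("original_frame.png")         # placeholder test images
rerecorded = cv2.imread("smartphone_recording.png")
# a small distance (e.g. fewer than 10 of 64 differing bits) suggests the same content
print(hamming(dhash(original), dhash(rerecorded)))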

Robustness attacks:

  • rotation
  • scaling
  • flipping
  • compression
  • noising
  • cropping
  • color / luminance / chrominance manipulations
  • re-recording with external camera


Your tasks:

  • Evaluate fingerprinting techniques for multimedia data regarding robustness
  • Implement the most promising approach as a framework for smartphones
  • Evaluate your implementation on different browsers and smartphones (a small metrics sketch follows this list), e.g.:
    • execution time
    • true positives / false positives
    • false negatives / true negatives
    • accuracy
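
The listed metrics can be derived from a confusion matrix over match decisions, and execution time can be captured per fingerprint. The sketch below only shows this bookkeeping; the actual fingerprinting call is left out, and the variable names are placeholders.

# Illustrative sketch: confusion-matrix metrics over match decisions and a
# per-fingerprint timing measurement. `matches` and `truth` are placeholders.
import time

def confusion_metrics(matches, truth):
    tp = sum(m and t for m, t in zip(matches, truth))          # true positives
    fp = sum(m and not t for m, t in zip(matches, truth))      # false positives
    fn = sum(not m and t for m, t in zip(matches, truth))      # false negatives
    tn = sum(not m and not t for m, t in zip(matches, truth))  # true negatives
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn,
            "accuracy": (tp + tn) / max(len(truth), 1)}

start = time.perf_counter()
# ... compute a fingerprint here ...
elapsed_ms = (time.perf_counter() - start) * 1000.0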

Required skills:

  • Web Technologies: JavaScript, HTML, CSS
  • Image/video processing: OpenCV, ffmpeg
  • Mobile Development: React-Native, Java

Related links:

Supervisor: Sebastian Schmidt

Context-aware encoding and dynamic encoding ladders

A sophisticated video encoding chain is a key factor for the successful delivery of media content over the internet. For a long time, the same encoding parameters were used for different types of content, resulting in a single static encoding ladder. However, as video content differs greatly in terms of complexity and detail, it is recommended to use custom encoding settings for individual media files. Depending on the characteristics of an asset, the best encoding parameters are identified. As a result, optimal video quality is achieved.
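
To make the idea concrete, the sketch below scales a static ladder by a per-title complexity estimate. The rungs, the complexity scale, and the scaling factors are placeholder assumptions, not production values; real per-title systems typically derive them from test encodes and quality metrics.

# Illustrative sketch: scaling a static ladder by a per-title complexity
# estimate in [0, 1]; rungs and factors are placeholder values.
STATIC_LADDER = [(360, 800), (540, 1800), (720, 3000), (1080, 6000)]  # (height, kbit/s)

def dynamic_ladder(complexity):
    # simple content gets by with fewer bits, complex content needs more
    factor = 0.6 + 0.8 * complexity
    return [(height, int(bitrate * factor)) for height, bitrate in STATIC_LADDER]

print(dynamic_ladder(0.2))   # e.g. a screencast or animation
print(dynamic_ladder(0.9))   # e.g. high-motion sports content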

Your tasks:

  • Evaluate classic per-title encoding approaches and identify potential weaknesses
  • Apply machine learning and AI technologies to improve existing per-title encoding solutions
  • Use your trained machine learning models to predict quality scores like VMAF and PSNR (see the regression sketch after this list)
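
A minimal sketch of that last step, assuming per-encode features (e.g. spatial/temporal information, resolution, bitrate) have already been extracted and that scikit-learn is used; the features and targets here are random placeholders, not real measurements.

# Illustrative sketch: predicting a VMAF-like quality score from per-title
# features with scikit-learn; features and targets are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# assumed features per encode: spatial info, temporal info, height, bitrate
X = np.random.rand(500, 4)
y = 60 + 30 * X[:, 3] - 10 * X[:, 1] + np.random.randn(500)   # placeholder scores

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("mean absolute error:", np.mean(np.abs(model.predict(X_test) - y_test)))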

Required skills:

  • Basic understanding/interest in media encoding
  • Basic understanding of machine learning 

Related links:

Supervisors: Daniel Silhavy, Christopher Krauß

Optimization of LSTM time-series forecasting

In contrast to one-step-ahead forecasting, multi-step-ahead forecasting has to deal with the accumulation of errors, reduced accuracy, and increased uncertainty. Machine learning approaches are increasingly supplanting traditional approaches such as the linear statistical AutoRegressive Integrated Moving Average (ARIMA) models. This shift towards machine learning takes place in many different fields, and forecasting is no exception, especially for non-linear time series, as methods like ARIMA assume that the observed data are linearly related. Recently, researchers compared stochastic and machine learning methods for multi-step-ahead forecasting and came to the conclusion that stochastic and machine learning models can perform equally well. This insight is reflected in the results of the last M4 forecasting competition, where a hybrid of a Recurrent Neural Network (RNN) and Exponential Smoothing (ES) won.

However, many researchers use unreliable loss functions such as the Root Mean Square Error (RMSE) for forecast training. Moreover, they only provide point forecasts and not an interval within which the future values are expected to fall.

What we have is a deep neural network of LSTM layers that is designed and trained on a custom loss function. The custom loss function allows the network to return not only a point forecast but also a prediction interval in which the forecast lies with a probability of 95%. The loss function is a combination of a weighted MASE distance measure and a modified LUBE function. Finding the right hyperparameters proved to be the most challenging task. On average, the approach performed 67% better than the unmodified loss function. It approximated the curve of the naive benchmark model with the additional benefit of prediction intervals.
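
The project's actual loss is not reproduced here; the following Keras sketch only illustrates the general shape of such a combination, with a MASE-like scaled point term and a LUBE-like penalty on missed coverage and interval width. The output layout, the naive scale, and the weights alpha and beta are assumptions for illustration.

# Illustrative sketch of a combined point + interval loss in Keras; not the
# project's implementation. Model output per step: [point, lower, upper].
# `naive_scale` (in-sample naive forecast error used for MASE-like scaling),
# `alpha`, and `beta` are assumptions for this example.
import tensorflow as tf

def make_interval_loss(naive_scale, alpha=1.0, beta=0.5):
    def loss(y_true, y_pred):
        point = y_pred[..., 0]
        lower = y_pred[..., 1]
        upper = y_pred[..., 2]
        y = y_true[..., 0]
        # MASE-like point term: mean absolute error scaled by the naive error
        point_term = tf.reduce_mean(tf.abs(y - point)) / naive_scale
        # LUBE-like interval term: penalize missed coverage and wide intervals
        miss = tf.nn.relu(lower - y) + tf.nn.relu(y - upper)
        interval_term = alpha * tf.reduce_mean(miss) + beta * tf.reduce_mean(upper - lower)
        return point_term + interval_term
    return loss

# usage (illustrative): model.compile(optimizer="adam",
#                                     loss=make_interval_loss(naive_scale=1.0))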


Your tasks:

Given is an LSTM network architecture with a custom loss function for producing point forecasts and prediction intervals. The point forecast is the estimated real future value, whereas the prediction interval states the bounds between which the value will lie with a set probability. The custom loss function is a combination of a point-forecast loss based on the Mean Absolute Scaled Error (MASE) and the Lower Upper Bound Estimation (LUBE) loss function.

Your task is to reduce the errors in the forecasting results for time-series data by integrating additional existing approaches, optimizing the given algorithms (e.g., by hyper-parameter tuning), or developing new algorithms that outperform the given ones. This might also be a data-set-specific improvement, e.g. because the new algorithm works better on weather data than on stock prices or vice versa. The algorithms shall be evaluated with a fixed time-window cross-validation. A reference data set is given for this project.
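
A fixed time-window cross-validation can be implemented as a pair of train/test windows slid over the series; the window lengths and step size below are illustrative.

# Illustrative sketch: fixed-size sliding train/test windows over a series.
import numpy as np

def fixed_window_splits(n_samples, train_len, test_len, step):
    start = 0
    while start + train_len + test_len <= n_samples:
        train_idx = np.arange(start, start + train_len)
        test_idx = np.arange(start + train_len, start + train_len + test_len)
        yield train_idx, test_idx
        start += step

# e.g. 1000 observations: train on 200, forecast the next 20, slide by 20
for train_idx, test_idx in fixed_window_splits(1000, 200, 20, 20):
    pass  # fit the LSTM on train_idx, evaluate forecasts and intervals on test_idx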

For the identification of the right hyper-parameters for the various functions and network, your tasks involve:

  • Finding the best parameters for the custom loss function, for example by optimizing with a single LSTM cell
  • Finding the best hyper-parameters for the network, including learning rate, learning rate decay, number of hidden layers, and number of units (a minimal grid-search sketch follows this list)
  • Training the network on a given data set
  • Evaluating the network
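
A minimal starting point for that hyper-parameter search, assuming TensorFlow/Keras; the ranges, the plain MAE loss (instead of the custom loss), the tiny epoch count, and the random placeholder data are only for illustration.

# Illustrative sketch: a small grid search over learning rate and LSTM units;
# ranges, architecture, and the random placeholder data are assumptions.
import itertools
import numpy as np
import tensorflow as tf

def build_model(units, learning_rate, window=20):
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units, input_shape=(window, 1)),
        tf.keras.layers.Dense(1),  # point forecast only; the custom loss is omitted here
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mae")
    return model

X = np.random.rand(512, 20, 1)   # placeholder (samples, window, features)
y = np.random.rand(512, 1)       # placeholder next-step targets

results = {}
for units, lr in itertools.product([16, 64], [1e-2, 1e-3]):
    history = build_model(units, lr).fit(X, y, epochs=3, validation_split=0.2, verbose=0)
    results[(units, lr)] = min(history.history["val_loss"])
print("best (units, learning rate):", min(results, key=results.get))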

Required skills:

  • Understanding of Artificial Neural Networks
  • Understanding of regression tasks in machine learning
  • Experience with Python and NumPy
  • Experience with machine learning frameworks, especially Keras and TensorFlow
  • Knowledge of the purpose of a loss function and an understanding of optimizers for ML training

Related links:

Supervisor: Christopher Krauß