Dan Ruta

Researcher/Developer

Publications
DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer
2023 | Under Review - Dan Ruta, Gemma Canet Tarrés, Andrew Gilbert, Eli Shechtman, Nick Kolkin, John Collomosse
Neural Style Transfer (NST) is the field of study applying neural techniques to modify the artistic appearance of a content image to match the style of a reference style image. Traditionally, NST methods have focused on texture-based image edits, affecting mostly low-level information while keeping most image structures intact. However, style-based deformation of the content is desirable for some styles, especially in cases where the style is abstract or its primary concept lies in its deformed rendition of some content. With the recent introduction of diffusion models such as Stable Diffusion, far more powerful image generation techniques have become accessible. In our work, we propose using this new class of models to perform style transfer with deformable styles, a capability that has eluded previous models. We show how leveraging the priors of these models can expose new artistic controls at inference time, and we document our findings in exploring this new direction for the field of style transfer.
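A minimal sketch of the interleaving idea, in PyTorch. The denoiser below is a toy stand-in rather than Stable Diffusion, and the AdaIN-based injection rule is illustrative, not the paper's exact mechanism:

import torch

def adain(content, style, eps=1e-5):
    # Shift the content feature statistics toward the style statistics.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean

def denoise_with_style_interleaving(denoiser, z, style_latent, timesteps, every=2):
    # Alternate ordinary denoising steps with style-injection steps,
    # weaving style information into the diffusion trajectory.
    for i, t in enumerate(timesteps):
        z = denoiser(z, t)
        if i % every == 0:
            z = adain(z, style_latent)
    return z

# Toy stand-in denoiser so the sketch runs end to end.
toy_denoiser = lambda z, t: 0.98 * z
out = denoise_with_style_interleaving(
    toy_denoiser, torch.randn(1, 4, 64, 64),
    torch.randn(1, 4, 64, 64), timesteps=range(50, 0, -1))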
ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer
2023 | Under Review - Dan Ruta, Gemma Canet Tarrés, Alexander Black, Andrew Gilbert, John Collomosse
Representation learning aims to discover individual salient features of a domain in a compact and descriptive form that strongly identifies the unique characteristics of a given sample with respect to its domain. Existing work in the visual style representation literature has tried to explicitly disentangle style from content during training, but a complete separation between the two has yet to be achieved. Our paper aims to learn a representation of visual artistic style that is more strongly disentangled from the semantic content depicted in an image. We use Neural Style Transfer (NST) to measure and drive the learning signal, and achieve state-of-the-art representation learning on explicitly disentangled metrics. We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics, encoding far less semantic information and achieving state-of-the-art accuracy in downstream multimodal applications.
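A hedged sketch of how NST can drive a disentanglement signal; the paper's actual losses may differ. The idea: the style encoder should map an NST-stylized image close to its style source and away from its content source.

import torch.nn.functional as F

def nst_disentanglement_loss(style_encoder, nst, content_img, style_img):
    # NST synthesizes an image with the content of one input and the
    # style of the other; that correspondence supervises the encoder.
    stylized = nst(content_img, style_img)
    z_stylized = F.normalize(style_encoder(stylized), dim=-1)
    z_style = F.normalize(style_encoder(style_img), dim=-1)
    z_content = F.normalize(style_encoder(content_img), dim=-1)
    pull = 1 - (z_stylized * z_style).sum(-1).mean()        # same style: embed close
    push = F.relu((z_stylized * z_content).sum(-1)).mean()  # content leakage: penalize
    return pull + push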
NeAT: Neural Artistic Tracing for Beautiful Style Transfer
2023 | Under Review - Dan Ruta, Andrew Gilbert, John Collomosse, Eli Shechtman, Nicholas Kolkin
Style transfer is the task of reproducing the semantic content of a source image in the artistic style of a second, target image. In this paper, we present NeAT, a new state-of-the-art feed-forward style transfer method. We re-formulate feed-forward style transfer as image editing, rather than image generation, resulting in a model which improves over the state of the art in both preserving the source content and matching the target style. An important component of our model's success is identifying and fixing "style halos", a commonly occurring artefact across many style transfer techniques. In addition to training and testing on standard datasets, we introduce BBST-4M, a new large-scale, high-resolution dataset of 4M images. As part of curating this data, we present a novel model able to classify whether an image is stylistic. We use BBST-4M to improve and measure the generalization of NeAT across a huge variety of styles. Not only does NeAT offer state-of-the-art quality and generalization, it is also designed and trained for fast inference at high resolution.
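The "editing rather than generation" framing can be sketched as predicting a residual applied to the source image; the toy network below is illustrative, not the NeAT architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualStylizer(nn.Module):
    # Predicts an edit (residual) on top of the source image instead of
    # synthesizing the stylized output from scratch.
    def __init__(self):
        super().__init__()
        self.edit = nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, content, style):
        style = F.interpolate(style, size=content.shape[-2:])
        delta = self.edit(torch.cat([content, style], dim=1))
        return (content + delta).clamp(0, 1)

stylized = ResidualStylizer()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 128, 128))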
PARASOL: Parametric Style Control for Diffusion Image Synthesis
2023 | Under Review - Gemma Canet Tarrés, Dan Ruta, Tu Bui, John Collomosse
We propose PARASOL, a multi-modal synthesis model that enables disentangled, parametric control of the visual style of an image by jointly conditioning synthesis on both content and a fine-grained visual style embedding. We train a latent diffusion model (LDM) using specific losses for each modality, and adapt classifier-free guidance to encourage disentangled control over the independent content and style modalities at inference time. We leverage auxiliary semantic and style-based search to create training triplets for supervision of the LDM, ensuring complementarity of content and style cues. PARASOL shows promise for enabling nuanced control over visual style in diffusion models for image creation and stylization, as well as for generative search, where text-based search results may be adapted to more closely match user intent by interpolating both content and style descriptors.
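Two-condition classifier-free guidance can be composed as below; this is a common multi-conditioning pattern rather than necessarily PARASOL's exact rule, and the model call signature is hypothetical:

def guided_eps(model, z, t, content, style, w_content, w_style):
    # Three forward passes: unconditional, content-only, and content+style.
    e_uncond = model(z, t, content=None, style=None)
    e_content = model(z, t, content=content, style=None)
    e_both = model(z, t, content=content, style=style)
    # Independent guidance weights give separate handles on each modality.
    return (e_uncond
            + w_content * (e_content - e_uncond)
            + w_style * (e_both - e_content))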
HyperNST: Hyper-Networks for Neural Style Transfer
2022 | ECCV VisArt - Dan Ruta, Andrew Gilbert, Saeid Motiian, Baldo Faieta, Zhe Lin, John Collomosse
We present HyperNST, a neural style transfer (NST) technique for the artistic stylization of images, based on hyper-networks and the StyleGAN2 architecture. Our contribution is a novel method for inducing style transfer parameterized by a metric space pre-trained for style-based visual search (SBVS). We show for the first time that such a space may be used to drive NST, enabling the application and interpolation of styles from an SBVS system. The technical contribution is a hyper-network that predicts weight updates to a StyleGAN2 model pre-trained over a diverse gamut of artistic content (portraits), tailoring the style parameterization on a per-region basis using a semantic map of the facial regions. We show HyperNST to exceed the state of the art in content preservation for our stylized content, while retaining good style transfer performance.
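The hyper-network idea in miniature: a small network maps a style embedding to additive weight updates for a frozen layer. The target here is a plain linear layer for brevity; the real model targets StyleGAN2 convolution weights and adds per-region semantic modulation, which this sketch omits.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    # A hyper-network predicts additive weight updates for a frozen layer,
    # conditioned on a style embedding.
    def __init__(self, style_dim, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)  # frozen base weights
        self.hyper = nn.Linear(style_dim, out_features * in_features)

    def forward(self, x, style_embedding):
        delta = self.hyper(style_embedding).view_as(self.weight)
        return F.linear(x, self.weight + delta)  # frozen weights + per-style update

layer = HyperLinear(style_dim=128, in_features=64, out_features=32)
y = layer(torch.randn(4, 64), torch.randn(128))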
StyleBabel: Artistic Style Tagging and Captioning
2022 | ECCV - Dan Ruta, Andrew Gilbert, Pranav Aggarwal, Naveen Marri, Ajinkya Kale, Jo Briggs, Chris Speed, Hailin Jin, Baldo Faieta, Alex Filipkowski, Zhe Lin, John Collomosse
We present StyleBabel, a unique open-access dataset of natural language captions and free-form tags describing the artistic style of over 135K digital artworks, collected via a novel participatory method from experts studying at specialist art and design schools. StyleBabel was collected via an iterative method inspired by 'Grounded Theory': a qualitative approach that enables annotation while co-evolving a shared language for fine-grained artistic style attribute description. We demonstrate several downstream tasks for StyleBabel, adapting the recent ALADIN architecture for fine-grained style similarity to train cross-modal embeddings for: 1) free-form tag generation; 2) natural language description of artistic style; 3) fine-grained text search of style. To do so, we extend ALADIN with recent advances in Visual Transformers (ViT) and cross-modal representation learning, achieving state-of-the-art accuracy in fine-grained style retrieval.
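The cross-modal training can be sketched as a CLIP-style contrastive objective between style embeddings and caption/tag embeddings; the paper's exact setup may differ:

import torch
import torch.nn.functional as F

def crossmodal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Matching (artwork, caption) pairs sit on the diagonal of the logits.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(len(img))
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))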
ALADIN: All Layer Adaptive Instance Normalization for Fine-grained Style Similarity
2021 | ICCV - Dan Ruta, Saeid Motiian, Baldo Faieta, Zhe Lin, Hailin Jin, Alex Filipkowski, Andrew Gilbert, John Collomosse
We present ALADIN (All Layer AdaIN), a novel architecture for searching images based on the similarity of their artistic style. Representation learning is critical to visual search, where distance in the learned search embedding reflects image similarity. Learning an embedding that discriminates fine-grained variations in style is hard, due to the difficulty of defining and labelling style. ALADIN takes a weakly supervised approach to learning a representation for fine-grained style similarity of digital artworks, leveraging BAM-FG, a novel large-scale dataset of user-generated content groupings gathered from the web. ALADIN sets a new state-of-the-art accuracy for style-based visual search over both coarsely labelled style data (BAM) and BAM-FG, a new 2.62 million image dataset of 310,000 fine-grained style groupings also contributed by this work.
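The core of an all-layer AdaIN descriptor can be sketched as concatenating per-channel mean and standard deviation from every encoder stage; the features below are random stand-ins for a real encoder's activations:

import torch

def all_layer_adain_descriptor(feature_maps):
    # feature_maps: list of (B, C_i, H_i, W_i) activations, one per layer.
    stats = []
    for f in feature_maps:
        stats.append(f.mean(dim=(2, 3)))  # per-channel means
        stats.append(f.std(dim=(2, 3)))   # per-channel standard deviations
    return torch.cat(stats, dim=1)        # one style embedding per image

feats = [torch.randn(2, c, 32, 32) for c in (64, 128, 256)]
style_code = all_layer_adain_descriptor(feats)  # shape: (2, 2 * (64+128+256))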
Learning Structural Similarity of User Interface Layouts using Graph Networks
2020 | ECCV - Dipu Manandhar, Dan Ruta, John Collomosse
We propose a novel representation learning technique for measuring the similarity of user interface designs. A triplet network is used to learn a search embedding for layout similarity, with a hybrid encoder-decoder backbone comprising a graph convolutional network (GCN) encoder and a convolutional (CNN) decoder. The properties of interface components and their spatial relationships are encoded via a graph, which also models the containment (nesting) relationships of interface components. Training is supervised via dual reconstruction and pair-wise losses, using an auxiliary measure of layout similarity based on intersection-over-union (IoU) distance. The resulting embedding is shown to exceed state-of-the-art performance for visual search of user interface layouts over the public Rico dataset and an auto-annotated dataset of interface layouts collected from the web. We release the code and dataset.
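The IoU-based auxiliary supervision can be sketched as follows. The rasterized occupancy distance here is a simplification; the paper encodes layouts with a GCN and trains with combined reconstruction and pair-wise losses:

import torch
import torch.nn.functional as F

def layout_iou(boxes_a, boxes_b, size=64):
    # Boxes are (N, 4) tensors of normalized (x0, y0, x1, y1) coordinates.
    def raster(boxes):
        grid = torch.zeros(size, size)
        for x0, y0, x1, y1 in (boxes * size).long():
            grid[y0:y1, x0:x1] = 1.0
        return grid
    a, b = raster(boxes_a), raster(boxes_b)
    union = ((a + b) > 0).float().sum().clamp(min=1)
    return (a * b).sum() / union  # IoU of the two layouts' occupancy maps

# The IoU distance cheaply labels positives/negatives for a standard
# triplet objective over the learned layout embeddings.
def layout_triplet_loss(anchor_emb, pos_emb, neg_emb, margin=0.2):
    return F.triplet_margin_loss(anchor_emb, pos_emb, neg_emb, margin=margin)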
WebSight: Using AR and WebGL shaders to assist the visually impaired
2018 | ACM Web Conference, Web4All - Dan Ruta, Louis Jordan, Tom James Fox, Rich Boakes
With growing browser performance and technological advances such as WebVR, WebAssembly and WebGL, the opportunities for novel assistive applications of technology are at an all-time high. With about 4% of the world's population being visually impaired, everyday navigation and path-finding remain unsolved problems. Tasks like simply crossing a room or walking down a street pose real dangers, and current technology-based solutions are often inaccessible or difficult to use, hindering their effectiveness. Keeping portability and compatibility in mind, a browser-based system was implemented which uses high-performance WebGL shaders to augment a video feed of a user's surroundings. A range of highly configurable shaders, such as edge detection and colour inversion, allows a user to adjust the effect to their specific needs and preferences. The effect is rendered in a VR format so that users can adopt it with a minimal learning curve, and the web-based platform keeps the system accessible to anyone with a smartphone, without incompatibility issues.
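The shaders themselves are GLSL, but the core edge-detection effect is an ordinary Sobel filter; a Python/PyTorch equivalent for illustration:

import torch
import torch.nn.functional as F

def sobel_edges(gray):
    # gray: (B, 1, H, W) luminance in [0, 1]; the output highlights object
    # boundaries, the cue the augmented video feed emphasises for the user.
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return (gx ** 2 + gy ** 2).sqrt()

edges = sobel_edges(torch.rand(1, 1, 240, 320))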
Other Projects
jsNet
Javascript-based deep learning library for MLPs and convolutional neural networks
Javascript, WebAssembly
jsNet - Github | jsNet - NPM
WebSight
Browser-based real-time VR video pass-through augmentation via WebGL shaders, such as edge detection, to aid the visually impaired.
Javascript, WebGL, Three.js
WebSight | WebSight - Github | WebSight - ACM
xVA Synth
Machine learning-based speech synthesis app, with voices of specific characters from Bethesda games
Javascript, Python, PyTorch, Electron
xVA Synth - Github

IMGArrays
Compression algorithm for numerical lists. The output is encoded as an image, and usually beats gzip; see the sketch below. Runs in the browser or Node.js.
Javascript
IMGArrays - Medium | IMGArrays - Github
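A rough sketch of the underlying trick, in Python for illustration (the actual IMGArrays format and its Javascript implementation differ): pack the raw bytes of the numbers into pixels and let a lossless image codec do the compression.

import io
import numpy as np
from PIL import Image

def encode(values):
    # Reinterpret the float32 bytes as pixels and PNG-compress them.
    raw = np.asarray(values, dtype=np.float32).view(np.uint8)
    side = int(np.ceil(np.sqrt(raw.size)))
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[:raw.size] = raw
    buf = io.BytesIO()
    Image.fromarray(padded.reshape(side, side)).save(buf, format="PNG")
    return buf.getvalue(), raw.size

def decode(png_bytes, n_bytes):
    pixels = np.asarray(Image.open(io.BytesIO(png_bytes))).reshape(-1)
    return pixels[:n_bytes].copy().view(np.float32)

data, n = encode([3.14, 2.71, 1.61])
assert np.allclose(decode(data, n), [3.14, 2.71, 1.61])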
VR Scribble
An experimental browser-based 2D 360° multiplayer virtual reality painting app, using two phones, WebSockets and WebVR.
Javascript, Pug (Jade), Node.js, Three.js
VR Scribble
Articles