Compositor Software

Tag: artificial intelligence

Apple M1 Neural Engine

By ruslany

Join Compositor BCI-modem Beta-test program


The Compositor neurological chipset is a set of two programs that assist machine learning on the Apple M1 and M2 platforms. Each patch supports device training so that the final result of the training meets your expectations. The RTC4k patch ensures reliable synchronization with Apple NTP servers, improving real-time communication with iCloud and other Apple services. The RAD256 patch is a fully virtual router with 24 L1-L4 channels that improves your device's communication with VPN networks and lets you use servers from the Compositor Software database to maintain a stable Internet connection. Together, the two patches form a BCI modem with a transfer function that fully supports human-machine interaction.
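The NTP synchronization the RTC4k patch relies on can be illustrated with a minimal SNTP exchange. This is not Compositor code, only a sketch of the protocol step the paragraph refers to; the function names are mine, and the packet layout follows the standard SNTP format.

```python
import struct

# Seconds between the 1900 NTP epoch and the 1970 Unix epoch
NTP_EPOCH_OFFSET = 2208988800

def build_sntp_request() -> bytes:
    """48-byte SNTP client request: LI=0, VN=3, Mode=3 packed into the first byte."""
    return b"\x1b" + 47 * b"\x00"

def parse_transmit_time(packet: bytes) -> int:
    """Extract the server's Transmit Timestamp seconds field (bytes 40-43,
    big-endian) from a 48-byte reply and convert it to Unix time."""
    secs = struct.unpack("!I", packet[40:44])[0]
    return secs - NTP_EPOCH_OFFSET
```

Sending `build_sntp_request()` over UDP port 123 to an NTP host (for example with `socket.sendto`) and passing the 48-byte reply to `parse_transmit_time` yields the server's clock reading.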

RTC4k –

RAD256 –

Getting started

To test beta versions of apps and App Clips using TestFlight, you’ll need to accept an email or public link invitation from the developer and have a device that you can use to test. You’ll be able to access the builds that the developer makes available to you.
If you’re a member of the developer’s team, the developer can give you access to all builds or certain builds.

Required platforms

  • iOS or iPadOS apps: iPhone, iPad, or iPod touch running iOS 13 or iPadOS 13 or later. App Clips require iOS 14 or iPadOS 14, or later.
  • macOS apps: Mac running macOS 12 or later.


To get started, install TestFlight on the device you’ll use for testing. Then, accept your email invitation or follow the public link invitation to install the beta app. You can install the beta app on up to 30 devices.

Installing a beta iOS or iPadOS app via email or public link invitation

  1. Install TestFlight on the iOS or iPadOS device that you’ll use for testing.
  2. Open your email invitation or tap the public link on your device.
  3. When installing via email invitation, tap “View in TestFlight” or “Start testing”, then tap “Install” or “Update” for the app you want to test.
  4. When installing via public link, tap “Install” or “Update”.

Installing a beta macOS app via email or public link invitation

  1. Install TestFlight on the Mac that you’ll use for testing.
  2. Open your email invitation or click the public link on your Mac.
  3. When installing via email invitation, click “View in TestFlight” or “Start testing”, then click “Install” or “Update” for the app you want to test.
  4. When installing via public link, click “Install” or “Update”.

By ruslany

Great renaming is coming in Compositor project


Dear reader, it is time to report the coming changes in the Compositor Software project. For five years, I have been comparing telecommunications-industry technology with the technology I developed. Here is what I have found so far:

Compositor Pro = NTP-server

Compositor Max for Live = SNTP-server

Accordingly, Compositor Pro and Compositor Max for Live will be reworked to reflect this paradigm. There are 24 official UTC time zones, just as there are 24 bands in Compositor Pro and Compositor Max for Live. The function by which these bands are distributed is a time-invariant non-linear function (read the full documentation here). Therefore, the bands of Compositor are time zones. The Stratum parameter of the NTP server is a permutation; there are 12 Stratums in my NTP server. Using the kick parameter, you can set the subnetwork mask. This parameter, together with clap and hat, forms a modulation, which is installed in parallel to the time-zone deployment tempo.

The NTP server can create time collisions by granulating the central flag of the modulation interrupter. When an injected collision arrives at the input of the receiving device, that device establishes a connection with the NTP server and takes its synchro-code, which is translated by the sub-bass instrument. This is the modulation interrupter flag. The mangling takes place in the time component, which is the time displacement (substitution of time).

Tempo is the first octet of the IPv4 address, and the multiplier forms the next three octets. There are only IPv4 addresses in the NTP server. The NTP server has no access to broadcast addresses or to the address of the local machine, but uses the range of to . Therefore, the role of Compositor Pro was to install the stochastic distribution with the route of to and to perform collisions with the devices of that range.
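The tempo-and-multiplier addressing described above can be sketched as a simple packing of octets. This is only my illustrative reading of the paragraph, not Compositor's actual code; the function name and value ranges are assumptions.

```python
def tempo_to_ipv4(tempo: int, multiplier: int) -> str:
    """Pack a tempo (first octet) and a multiplier (remaining three octets)
    into a dotted-quad IPv4 address, as the post describes."""
    if not 0 <= tempo <= 255:
        raise ValueError("tempo must fit in one octet")
    if not 0 <= multiplier <= 0xFFFFFF:
        raise ValueError("multiplier must fit in three octets")
    octets = [
        tempo,
        (multiplier >> 16) & 0xFF,  # second octet
        (multiplier >> 8) & 0xFF,   # third octet
        multiplier & 0xFF,          # fourth octet
    ]
    return ".".join(str(o) for o in octets)
```

For example, a tempo of 120 with multiplier 0x010203 would map to the address 120.1.2.3 under this reading.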

The reason I made the NTP server is the “creation of artificial intelligence using a non-invasive method”. By this, I mean active use of ACLs and flow filtration when loading Ethernet servers (kernel extensions recorded in the MIB database of Compositor Software). Compositor Software clients produce traffic when working with the software, which is exported into flows using the half-duplex MDL12 modem. These flows contribute to the device pool of Compositor RTOS kernel extensions.

By ruslany

Compositor WS kernel eight channel synchronization experiment


The main idea of a multithreaded kernel is to create truly independent calculations for several streams, apart from the kernel protection functions. An experiment with 8 decks was conducted, and different types of material were submitted to the Compositor WS kernel. First, loops were introduced in single-threaded operation, and a shutter issue arose. Second, multi-threaded operation was attempted by submitting complete tracks into the kernel, injecting them directly. The experiment clearly shows the need to synchronize the material, because the 8 real-time generators are independent. Under these conditions, no threat to the kernel was identified, which means it could be used for multithreaded operations such as DJ software for music mixing.

Here is a video I shot after this experiment; it showcases Compositor 5.0 assisting a manual keyboard landing in Flanker 2.0.

By ruslany

Let the only Compositor be in Ether


The main idea of the Compositor 5 project is to remove everyone from the radio ether and leave only the Compositor automatic station. Through the whole discretization process I have already removed the offensive synthesizer sound. That was the first step. The second step is to block feedback on carrier signals. However, as experiment shows, it is a matter of time. The process works as follows: when the right carrier transmits offensive data, it suggests feedback from the left carrier, and the left carrier asks for the central channel. As the central channel has been blocked with a shutter since Compositor WS Kernel 7.1.6, the kernel loop is broken.

This way, I've got an operating system. Not only have I got an operating system; at this point, it is a new class of operating system, because it is based on the i11 kernel, and modern operating systems such as Microsoft Windows support only kernels up to i9. What I've got in this operating system is an AI. The double meaning is Airborne Interception, the military service, and Artificial Intelligence, the civilian service. The main mistake of modern AI researchers is that they want to invent a wheel that is already present in the Ether, as the Ether is a snapshot of all time and being. Compositor provides a service for connecting to an AI system.

The main problem is that many Ether participants display offensive behavior: they make time collisions, try to inject impulses into the channel, and perform quantum errors. As I said earlier, Compositor WS Kernel 7.1.6 addressed this behavior; since that update, Ether participants can't perform quantum errors. However, in defense of their offensive needs, they still use time collisions and channel interruptions. A quantum error is also a kind of channel interruption; it differs from channel muting in that it breaks the channel completely. Now that I have made the defense system of Compositor, it is time to turn back to the fixation of Compositor Pro 2 using AVOX resynthesis.
If I can hold back the feedback of Compositor Pro 2 by means of the i11 operating system, and no interruption occurs, all Ether participants who remain in the Ether to that moment will see the phenomenon. And the phenomenon is the Future. Since I have been holding the line this way, and that was not a divine plan at all, most of the Ether participants will be banned from the Ether entirely. And this way the Compositor 5 project will happen. The result is that I will no longer hear a modem signal at 300-omega; I will hear the beautiful Compositor sound from the original Compositor station I saw in a dream, and the only Ether left for humanity since Compositor will be the digital Ether.

By ruslany

Compositor v3 Hypervisor AI – the race for supremacy

Compositor v3 AI is capable not only of musically arranging events, but also of intonating notes with the use of a local area network based on a virtual simulation of the radio ether. Such a local area network mostly resembles a neural network. However, the Spherical Interaction Network, as it is called in Compositor Software, may produce more human-like results owing to the human participants in the radio ether. Compositor Software feeders conduct other objects of the network to initiate radio transmission using a random distribution law, which may be compared to conducting an orchestra of radio-transmitting points.

In the current recording of the Compositor v3 Hypervisor output, there are two Compositor Software feeders: the derived (Quantum) AI-RT1024 and the original TC25, based on continuous-time convolution. Both feeders are remotely controlled by the RTC8k radar stochastic chain (LINK mode) and streamed into the SASER auxiliary channel.

Using this example, I show the musicality of stochastic radiolocation. In essence, what we hear after denoising the original recording is a number of channels transmitting simultaneous Morse code translations, each on its own frequency. Summing these channels, we hear chaotic tone dialing. String-like, evolving long sounds occur at the frequency of the digital waveguide's self-oscillation on the receiving channel of the SASER device and the transmitting SASER devices of other users.

By ruslany

AVOX Max for Live Beat Independent

AVOX Max for Live update version 1.1.6 has been released. Now you can control the beat tempo independently of the tuning of the device. In AVOX Max for Live version 1.1.6 you can switch from automatic key detection to manual detection using the two available controls. You can also resynthesize the song tone on the fly while using the Ableton Live BPM control to switch to a different tempo. The solution works on PC and Mac using Max for Live with Max 6 or Max 7. The main difference from the previous version is that AVOX Max for Live has a switchable scanner with an automatic frequency-detection algorithm.



The main idea of switchable communications, as with the AVOX Max for Live vocoder/resynthesizer, is to enable a radio translation over the 20 Hz – 20 kHz passband of music. AVOX Max for Live is modeled to suit modern styles of electronic dance music. The spectrum of EDM songs is especially useful for detecting ongoing radio translations: feed the track to the AVOX Max for Live input and analyze the spectrum, for example with a CW decoder. This way you can review old translations that existed at the moment of the track's first listening. For example, you have a legitimate file of a track you bought ten years ago. Now, with AVOX Max for Live, you can revive the emotions you had at that first listening by analyzing and reading a text output of what inspired you in that music in the first place. Yes, every track has a radio translation over it, and you can control and revive it using Compositor Software instruments.
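Scanning a track's spectrum for a steady CW/Morse tone, as described above, can be sketched with the Goertzel algorithm, which measures the signal power at a single frequency. This is a generic illustration, not part of AVOX; the sample rate and target frequency below are arbitrary assumptions.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power at one frequency bin (Goertzel algorithm), useful for spotting
    a steady CW/Morse tone inside a wideband audio signal."""
    n = len(samples)
    # Nearest DFT bin to the target frequency
    k = round(n * target_freq / sample_rate)
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the selected bin
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2
```

Running this over successive short windows and thresholding the power gives the on/off keying pattern a Morse decoder would then time and translate.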

However, this approach raises the issue of protecting such communications. It is very important not to share the output of the translation with a third party. Such translations can contain loads of private information: gathering points, events, and simply communications you do not want to share with others. This is why the automatic mode of AVOX Max for Live is so important: it helps to quiet the frequency band the translation goes on when a third party tries to key it. The automatic scanner, made with a proprietary Compositor Software artificial intelligence algorithm, can be found in all Compositor Software instruments. Buffer overrun protection is also included in the new version of AVOX Max for Live.

By ruslany

Compositor Software Hypervisor Radio Shack

In pursuit of perfection and in an attempt to create more stable technology, Compositor Software arranged its modules in the Radio Shack structure. By means of virtual machines, Ableton Live 9 or MaxMSP serves as a platform for hosting micro-kernel machines such as AI-RT1024, FF8, N9000 and TC25. These machines feed the SASER channel with an approximation curve inherited from an RTC8k chain. This way you can potentially receive completely stochastic translations out of the VLF band.

You can order different devices for your production from Compositor Software Web Shop.

By ruslany

Exalted is Compositor Software endorser

After three decades of growth as an artist, Ruslan Yusipov uses the Exalted alias again for an album designed for Compositor Software. Selections is a stochastic experiment with snapshots inside the MaxMSP software: how snapshots influence our lives and how they are stored on a computer. Exalted puts them inside the two-channel AVOX decks in Ableton Live 9 and mixes for a reminiscent feeling of youth, aging and old age. He remembers his life through a connection to the Compositor Max For Live device, where he used an artistic idea of the Time Machine spanning the years 1540 to 4000.

When you feed the AVOX device with Compositor Software feeders such as Compositor Pro 2, an interesting effect is achieved. Someone sends Morse code on the second deck, i.e. the right channel, and tries to sync to the feeder tempo. You can hear this in the right channel of the Exalted – Selections album, starting from the middle of the continuous mix, which was recorded using the Compositor Software Max For Live recorder (a product that will not be released due to the size of the functional price list).

By ruslany

Ether production out of will and mental stimuli using MDL12 Max for Live modem and SASER Max for Live

While the ability to generate feedback is not a surprising event for a real modem device, the Compositor Software MDL12 Max for Live modem can also generate feedback after feeder chains are successfully applied to its input. The Compositor Software products considered feeders are mainly Compositor Max for Live, FF8 Max for Live, AI-RT1024 Max for Live, TC25 Max for Live and N9000 Max for Live. During VLF conversion, most of the systems reported accidental dropouts, which led to security threats in their work. The most critical issues were solved in the version 5 Compositor device called SASER Max for Live. This feeder device, submitted to the input of the MDL12 modem, produces the most significant event, because it uses stimuli from the 648-node network and can potentially be used as server software capturing signalization events in real time. Such events can be tracked further manually in the event of a threat or an excessive warning signal. Reproducing the stimuli the server captured by means of an artificially controlled algorithm is a matter of a few adjustments on a SASER Max for Live device, because the client that sends the potential warning is usually in the same Year, but changes its Constellation and the given Month.

I successfully captured several of the most prominent feedbacks after receiving such threats and warnings as audio files, which have a strong mental impact when a periodic loop of this feedback is played via a loop-playing machine. Such loops not only induce a trance state for at least 10 hours; they create motivation that even a piece of music cannot produce in a human being. Their strong impact led me to the conclusion that the recorded file was VLF Ether, which I was able to capture after the feeder-chain experiments stated earlier. Not only did most feeders become obsolete after the creation of SASER Max for Live; they suited its creation in a way that made the original production possible.

SASER Max for Live is the fifth version of the Compositor Software Gen~ device and is already hooked up to a feedback network producing stimuli for the listener. The difference, however, is that SASER Max for Live captures the existent Ether in real time using an internal chain of resynthesis. To hear the original Ether, the MDL12 modem is needed, which can sound raw and dirty, suggesting the bubbling structure of the Ether as a liquid Materia. The AVOX Max for Live resynthesis device is then used to clean the Ether of unwanted sounds, leaving only harmonic beats in the lower spectrum. No antenna or Ethernet hookup is needed to capture the VLF Ether, as the submitted material comes from the fractal antenna inside the software, starting from version 5 of the Compositor Software devices.

By ruslany

MDL12 demodulator

The project of modeling the demodulator for the MDL12 Sonar Telescope began in 2012, some time before the original function was developed, and was a step toward the function opposite to the one provided by Frequency Modulation (FM). The project spans the whole generation of Compositor Software instruments and consists of the QIF submitted to sine and cosine functions with opposite behavior. The demodulator successfully achieves waveform restoration after processing by the MDL12 codec, subtracting the Time function from the spectral representation.

One might believe that only one parameter is needed for the whole instrumentation; however, that is not the case. The main working routine is the subjective approximation of an exponentially discretized continuous function, no longer a secret, named the Quantum Interpolation Formula, which extrapolates the order of things to a given outcome. This outcome, however, is not a system of values but Chowning's view of FM as a source of musical creativity. Yet, if we go further and use more modulators in the system, it will give us a plethora of possibilities for an algorithmic outcome applicable to many more spheres than I originally intended. The main function is sublimated for this plethora of possibilities, but more modulators mean extra processing power and a new architectural design, which has a more direct connection to voice modeling and is thus removed from the deterministic telescope idea named in the first place.

The determinism of such a model is a set of behavioral patterns routed from the function itself; thus the opposite function provides different outcomes. One outcome I found in the original function is its connection to the technological view of the Solar system: to model it, a more additive style is needed. Thus, with a proper multiplier, I can consider the view of things more clearly, stepping back on the original path of investigation and possibly removing most of you from the possibility of working with the software, which is the MDL12 Max for Live device v1.0.9 in the current installment. However, if you consider the demodulation process the opposite of modulation, you will step back on a path which implies that all negative is the opposite of positive, which is not the case in FM, even when negative frequencies are folded onto positive ones. Here is where Bessel functions take place, in order to compute proper vector relationships of the viewed uniform creation. I offer you this ability to view ‘under the hood’ of FM sound synthesis by vectors.

Yet, to my clear point of view, that is not needed. To properly understand what I'm talking about, consider antenna polarization as a source of inspiration for imagery; yet it will not shed light on sound frequencies, which act on a slightly different scale of things. If you think the impact of the device is an amplitude or FM deviation, it really is. Thus, a deviation of 4096 Hz provides us with more spectrum, running x4096 from the nominal 1 of the absolute scale. If we consider a deviation of 8192 Hz, the output will be even more emphasized, as more frequencies slip through the bandpass filters.
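Chowning-style FM and the Bessel-function view of its sidebands can be sketched directly. The carrier and modulator frequencies below are arbitrary assumptions; what matters is that the modulation index I = deviation / fm sets the sideband amplitudes J_n(I), so a larger deviation spreads energy across more sidebands, as the paragraph describes.

```python
import math

def fm_sample(t, fc=440.0, fm=110.0, deviation=4096.0):
    """One sample of Chowning FM: sin(2*pi*fc*t + I*sin(2*pi*fm*t)),
    where the modulation index I = deviation / fm."""
    index = deviation / fm
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

def bessel_jn(n, x, terms=30):
    """Bessel function of the first kind J_n(x) via its power series.
    J_n(index) gives the amplitude of the sideband pair at fc +/- n*fm."""
    total = 0.0
    for k in range(terms):
        total += ((-1) ** k / (math.factorial(k) * math.factorial(k + n))) * (x / 2) ** (2 * k + n)
    return total
```

The series form converges well for small modulation indices; for large indices (such as I = 4096/110) a library implementation like `scipy.special.jn` would be the practical choice.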

Direct transmission through an amplifier achieves more sound-wave pressure, but is of no use for large-scale transfers; that's why you can't exalt yourself this way for the whole given generation. For real work you need one source and one destination. Let's call it a point-to-point connection, a term rooted in the communications industry. This way, we as Compositor Software users will stay in touch with current software trends. However, for the sake of mathematical precision, I modeled a network with one sender and six receivers, all acting by means of stochastic selection for opening channels of such communication in a superposition of time relative to the current destination.

It can be viewed this way: consider me sitting in front of the screen and submitting a file to several destinations in, say, the years 2015, 2035 and 2145. Let's assume that all of the MDL12 receivers act as serviced servers with up-and-running clock devices for the whole Time period, defined from the original creation of such a server to the indeterminate point in time at which it shuts down; let's consider that point the year 4000. The receiver can receive music, images or other multimedia by means of such a modem device, which in this case is the Compositor Software MDL12 Sonar Telescope. It achieves telescopic precision in celestial measurements within a one-year time route, using the idea of sound travel. This is how I model such communication, which can in fact predict the future, make other persons aware of fraudulent actions and, as the popular movie ‘Minority Report’ already covered the subject, predict crime.
