Author Topic: Checking in  (Read 1175 times)

Offline jchuchla

  • Sr. Member
  • ****
  • Join Date: Jul 2014
  • Location:
  • Posts: 290
  • Kudos: 0
Checking in
« on: December 07, 2016, 03:52:30 PM »
Just checking in to let you know the V3 team got the invitation and the key players are subscribed to this board.
We've got pretty much the same needs as Keith reported for xLights, though we organize our objects differently, so we may request slightly different calls for some things. We'll get more in depth as we figure it out.

Just a few high level questions:

Are there plans to implement anything in the API for syncing and media cueing?

Are there any plans to flatten the channel config and make all output methods more uniformly defined?

Will any awareness be added for higher-level prop objects (Vixen elements / xLights models) that could be integrated for display testing and for external prop-based live input or triggering?


Sent from my iPhone using Tapatalk

Offline CaptainMurdoch

  • Administrator
  • *****
  • Join Date: Sep 2013
  • Location: Washington
  • Posts: 9,012
  • Kudos: 179
Re: Checking in
« Reply #1 on: December 08, 2016, 08:40:55 AM »
I would like to add the ability to have FPP start/stop a sequence or media file on demand, but I don't know if an HTTP API is the right way to sync that with an external source while playing. Is this what other systems use, or do they use a proprietary protocol like FPP's current MultiSync packets?
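For discussion's sake, a sync packet doesn't need to carry much. Here's a rough sketch of the kind of UDP payload I mean; the field names and layout are illustrative only, not the actual MultiSync wire format:

Code:
/* Illustrative sketch of a sequence/media sync packet sent over UDP.
 * Field names and layout are hypothetical, not the actual wire format. */
#include <stdint.h>

#define SYNC_ACTION_START 0
#define SYNC_ACTION_STOP  1
#define SYNC_ACTION_SYNC  2   /* periodic position update */

typedef struct __attribute__((packed)) {
    char     magic[4];        /* protocol identifier */
    uint8_t  action;          /* one of the SYNC_ACTION_* values */
    uint8_t  fileType;        /* 0 = sequence, 1 = media */
    uint32_t frameNumber;     /* current sequence frame */
    float    secondsElapsed;  /* playback position within the file */
    char     filename[64];    /* file being played, NUL terminated */
} SyncPacket;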

The goal is to get all channel outputs defined in one place.  Currently we are in the middle of a C -> C++ conversion of the Channel Output code: the C outputs use the CSV file and the C++ classes use the channeloutputs.json file.  Then there are the E1.31/ArtNet and FPD outputs, which have their own files and are also written in C.  We want all of these defined in a single place in FPP v2.0.  For E1.31/ArtNet, we also want to separate the input vs. output universe config.  I have plans to create a Channel Input framework similar to FPP's Channel Output framework, and E1.31 (possibly ArtNet) would be one of those inputs.  So, if we keep the JSON files, there would be a channelinputs.json file to go along with channeloutputs.json.
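To give a rough picture, a channelinputs.json could mirror the shape of channeloutputs.json. The keys below are just an illustration of the idea, not a settled schema:

Code:
{
  "channelInputs": [
    {
      "type": "e131",
      "enabled": 1,
      "universe": 10,
      "startChannel": 1,
      "channelCount": 512
    }
  ]
}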

FPP already has the concept of Pixel Overlay models, and xLights can already export the channelmemorymaps file that is used to configure them.  I can go into the FPP Channel Test screen, select one of my living room windows from a dropdown model list, and turn on testing for that single model.  Currently the Pixel Overlay feature is limited to contiguous channels and only really understands blocks of channels and matrices, but I do have plans to modify this to support non-contiguous channels.

Since these are configured as Pixel Overlay models in FPP, you can turn them on/off and set channel values from the command line using the fppmm utility.  The new playlist code actually lets you turn these models on/off and set channel values from within a playlist rather than having to rely on external scripts called from events.

I would also like to tie this in at some point with channel data such as the event control channels, but I haven't given that much thought yet.  Currently, the only way to trigger an action with a model is to use an event and a script which calls fppmm, or some other utility that can talk to the Pixel Overlay memory-mapped files, such as the Perl library which several plugins use to put text up on matrices.
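For anyone following along, the channelmemorymaps file is just CSV, one model per line. Something like the line below would define a 150-channel model; I may be misremembering the exact column order, so double-check it against a file exported by xLights:

Code:
LivingRoomWindow1,1,150,horizontal,TL,1,1

That's model name, start channel, channel count, orientation, start corner, string count, and strands per string. Once the model is defined, something along the lines of "fppmm -m LivingRoomWindow1 -o on" should enable the overlay from the command line, though check fppmm's help output for the exact flags.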
-
Chris

Offline jeffu231

  • Newbie
  • *
  • Join Date: Sep 2014
  • Location:
  • Posts: 2
  • Kudos: 0
Re: Checking in
« Reply #2 on: December 08, 2016, 07:48:10 PM »
It seems to me that for some of the real-time interactions, something like the MultiSync API would probably be nice to use. I actually had ideas of building my own sign using a Pi to display dynamic web pages constructed from input about what is playing and from other triggers or data I want to mash up into the display. I was thinking of using the MultiSync mechanism to make my device look like another FPP on the network so it could react in real time to the current sequence playing. I never got far enough to see what is actually exchanged in that protocol, whether it would be enough for what I needed, or whether I would still need to use some of the other event hooks instead.
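If I ever pick that back up, the listening side is small. Here's a minimal sketch of a UDP listener for whatever fppd broadcasts; the port number is an assumption on my part, so verify it against the FPP source:

Code:
/* Minimal sketch: listen for sync packets on UDP.
 * The port is assumed; verify against the FPP source. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(32320);              /* assumed sync port */
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    char buf[1500];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
        if (n > 0)
            printf("received %zd byte packet\n", n); /* parse sync data here */
    }
}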


In that same light, if the user is running some aspects of their show from a sequencer and other things, like video or panels, from FPP, then this same protocol could be used for them to talk as well. There may be times when something like a third-party app is the master and the FPP devices become slaves.
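The master side would be equally small: broadcast the same packet on a timer from whatever app is driving the show. A sketch, reusing the hypothetical SyncPacket layout from the earlier post:

Code:
/* Sketch: a third-party master broadcasting a sync packet.
 * Assumes the hypothetical SyncPacket struct from the earlier post
 * and a socket with SO_BROADCAST already enabled via setsockopt(). */
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

void send_sync(int sock, const SyncPacket *pkt) {
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(32320);                        /* assumed sync port */
    dst.sin_addr.s_addr = inet_addr("255.255.255.255"); /* LAN broadcast */
    sendto(sock, pkt, sizeof(*pkt), 0,
           (struct sockaddr *)&dst, sizeof(dst));
}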

Offline jchuchla

  • Sr. Member
  • ****
  • Join Date: Jul 2014
  • Location:
  • Posts: 290
  • Kudos: 0
Re: Checking in
« Reply #3 on: December 29, 2016, 10:27:17 PM »
Quote from: CaptainMurdoch on December 08, 2016, 08:40:55 AM
Evidently I don't have my notifications set up properly on this forum. I completely missed this response.
I agree that the HTTP API is not the best place for real-time syncing or media cueing. The way it's done in the pro software is almost exactly the same as your event system, with in-band channel values acting as triggers for play/stop/pause events for media clips. It's often done with 2-4 channels for the media index and another channel for the transport control command. Sometimes a few more channels are used as parameters for the transport command; for example, with a command such as "play from", the start time is contained in the next several channels. There's no standard for the channels used. They mostly work similarly, but the channels and values for commands vary between manufacturers' systems.
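To make that concrete, decoding one of these schemes is just a matter of reading a few bytes out of the universe. The channel layout below (two channels of media index, one transport command, two channels of start time) is made up for illustration; as I said, every manufacturer lays this out differently:

Code:
/* Illustrative decode of in-band transport control channels.
 * The channel layout is hypothetical; every vendor differs. */
#include <stdint.h>

enum Transport { T_STOP = 0, T_PLAY = 1, T_PAUSE = 2, T_PLAY_FROM = 3 };

typedef struct {
    uint16_t mediaIndex;   /* which clip to act on */
    uint8_t  command;      /* one of enum Transport */
    uint16_t startSeconds; /* only meaningful for T_PLAY_FROM */
} TransportCmd;

/* dmx points at the start of the control block within the universe */
TransportCmd decode_transport(const uint8_t *dmx) {
    TransportCmd c;
    c.mediaIndex   = (uint16_t)((dmx[0] << 8) | dmx[1]); /* 2-channel index */
    c.command      = dmx[2];                             /* transport command */
    c.startSeconds = (uint16_t)((dmx[3] << 8) | dmx[4]); /* "play from" offset */
    return c;
}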
Other systems use similar concepts but use MIDI (specifically MMC) as the control mechanism. This is sometimes preferred because you can send more complex data, like a filename, in a simple message, as opposed to encoding it into DMX channels or just using index numbers. I don't think this is the best solution for our community though.
I think that if the FPP file format could support metadata that defines the media files used in the sequence and assigns references to them, then the in-band channel value method could call those events. It's kinda like having the event setup embedded in the sequence file.
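Even something as simple as a media table in the sequence metadata would do it; the in-band index channels would then just select a row. This is purely a sketch of the idea, not a proposed file format change:

Code:
{
  "mediaTable": [
    { "index": 1, "file": "intro.mp4" },
    { "index": 2, "file": "snow_loop.mp4" }
  ]
}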

This also fits well with Jeff's ideas about making MultiSync more universal. As you know, I'm running my sign with a variation of this concept. If you proceed with the work that allows separate input and output channels, this would (I think) eliminate the need for bridge mode and could allow FPP to accept control commands from one universe while still driving data or running a schedule.


Sent from my iPhone using Tapatalk

 
