GetMyPOV is an app for iPhone (the name stands for "Get my Point Of View"). The app lets a smartphone transmit the output of its cameras to another smartphone and lets that second smartphone control how the local video capture and forwarding take place. If desired, the captured video feeds a proprietary motion detection algorithm that suspends further processing and forwarding of frames until the next detected motion event. Video streaming can run in both directions, even at the same time.
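The motion detection algorithm itself is proprietary and not documented here; purely as an illustration of the general idea of a motion gate, a minimal Swift sketch might look like the following, where the type name, threshold and pixel-difference metric are hypothetical and not the app's actual implementation.

```swift
// Conceptual sketch of motion-gated frame forwarding; the threshold value,
// the pixel-difference metric and the type name are hypothetical.
struct MotionGate {
    var threshold: Double = 0.02            // fraction of pixels that must change
    private var lastLuma: [UInt8] = []

    // Returns true when the new frame differs enough from the previous one
    // to be processed and forwarded; otherwise the frame can be dropped.
    mutating func shouldForward(luma: [UInt8]) -> Bool {
        defer { lastLuma = luma }
        guard lastLuma.count == luma.count, !luma.isEmpty else { return true }
        var changed = 0
        for i in 0..<luma.count where abs(Int(luma[i]) - Int(lastLuma[i])) > 16 {
            changed += 1
        }
        return Double(changed) / Double(luma.count) > threshold
    }
}
```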
With version 3, communication can also take place over the Internet, provided that a router with a public IP address is available to address one of the two sides: a smartphone "A" attached to the Wi-Fi network served by the public-IP router can get and control the output of the cameras of a smartphone "B" connected to the Internet in whatever way it prefers, and/or smartphone "B" can get and control the output of the cameras of "A".
The receiving device can simply view the real-time video stream coming from the other side (no audio is provided), freeze and/or save specific frames of the scene being received, and record the video stream.
The receiving device can "control" the most part of attributes of the remote side: which camera (front/back) to use, whether to use the flash light, which frame-format among those provided by the cameras hardware, the fine tuning of the portion of the video capturing surface used for processing and forwarding (remote zoom), the suspension of the whole camera activity, that occurs, moreover, in a natural way on disconnection. Then, depending on the networking scenario adopted or on the particular app of the GetMyPOV family, also the quality of frames (due to compression), the rate of use in frames forwarding and the activation/deactivation of the motion detection are controllable features.
Both the receiving side and the emitting side keep track of the frame traffic they are involved in and show the related figures at the top of the screen. The receiving side is also informed about the battery charge status of the remote device.
Recording can be tuned by keeping only a percentage of the received frames, so as to slow down the growth of the occupied storage space, and can be set to split the recording into files whose maximum size is specified by the user.
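As a rough sketch of how percentage-based recording and size-based file segmentation could work together (names, parameters and logic are hypothetical, not the app's actual code):

```swift
import Foundation

// Conceptual sketch only: keep a user-chosen share of the incoming frames and
// start a new file whenever the user-chosen size cap is reached.
final class FrameRecorder {
    let keepPercentage: Double   // e.g. 25 means "record about 1 frame in 4"
    let maxFileBytes: Int        // user-chosen cap before a new file is started
    private var received = 0
    private var kept = 0
    private var currentFileBytes = 0
    private(set) var segmentIndex = 0

    init(keepPercentage: Double, maxFileBytes: Int) {
        self.keepPercentage = keepPercentage
        self.maxFileBytes = maxFileBytes
    }

    // Called for every incoming frame; stores only the requested share of them.
    func handle(frame: Data) {
        received += 1
        guard Double(kept) / Double(received) < keepPercentage / 100.0 else { return }
        kept += 1
        currentFileBytes += frame.count
        // ... append `frame` to the current segment file here ...
        if currentFileBytes >= maxFileBytes {
            segmentIndex += 1        // subsequent frames go to a new file
            currentFileBytes = 0
        }
    }
}
```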
Every frame collection can be viewed at a later time, either frame by frame (by tapping) or at its natural capture timing, with an internal viewer that can zoom both while viewing a single frame and during video playback (the viewer also lets you pick a frame and save it as a jpg file; each frame is identified by a native progressive number).
A frame collection can also be used to build a final mp4 file, which makes it sharable with the external world. The jpg files coming from any screenshots are sharable as well (a frame collection, as such, can be transferred only to another device running the app, by means of proprietary get/send commands).
The remote side of the app and the local one work together to keep (by default) the observed scene always oriented with the human horizon, so that understanding "what happens" stays easy in every situation.
Beyond auto-discovery, the app supports explicit network addressing, which allows keeping track of already listening devices whose camera session is ready to be turned on (on demand) and connecting to them one at a time, in order to get and control the output of their cameras.
In version 3 the willingness to be (remotely) controlled by another peer becomes a configurable attribute and is enabled by default.