The Rig Image
Identifying the root cause of damage to a pulled bit as soon as possible aids preparation for future bit runs. Today, such bit damage analyses are often anecdotal, subjective, and error-prone. The objective of this project was to develop a software algorithm that automatically analyzes 2D bit images taken at the rig site and quickly identifies the root cause of bit damage and failure.
A labelled dataset was first created in which the damage seen in bit photos was associated with the appropriate root cause of failure. Particular attention was given to the radial position of the damaged cutters. Using the 2D bit images (which can be obtained at the rig site), a convolutional neural network, along with other image-processing techniques, was used to identify the individual cutters, their positions on the bit, and the degree of wear on each cutter. A classifier was then built to identify the root cause of failure directly from these images.
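As an illustration only: the paper's actual model is a learned classifier over CNN-extracted features, and the thresholds, labels, and function below are invented for this sketch. A toy rule-based mapping from per-cutter wear grades and radial positions to a coarse root-cause label might look like:

```python
# Hypothetical sketch of the per-cutter-features -> root-cause step.
# Cutter detection itself would be done by a CNN; here we assume its output
# is already available as (radial_position, wear_grade) pairs.

def classify_root_cause(cutters):
    """cutters: list of (radial_pos, wear), with radial_pos in [0, 1]
    (0 = bit center, 1 = gauge) and wear in [0, 8] (IADC-style dull grade).
    Returns a coarse root-cause label. All thresholds are illustrative."""
    nose = [w for r, w in cutters if r < 0.4]       # inner cutters
    shoulder = [w for r, w in cutters if r >= 0.4]  # outer cutters
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    if avg(shoulder) - avg(nose) > 2.0:
        return "whirl"              # shoulder/gauge damage dominates
    if avg(nose) - avg(shoulder) > 2.0:
        return "core-out"           # center damage dominates
    if avg(nose) + avg(shoulder) > 10.0:
        return "thermal/abrasive"   # heavy, uniform wear
    return "normal wear"
```

The point of the sketch is only that the radial distribution of damage, not just its overall severity, carries the diagnostic signal.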
This work utilized a large dataset of wells that included multiple bit images, surface sensor data, downhole vibration data, and offset-well rock strength information. This dataset helped relate the type of dysfunction seen in the downhole and surface sensor data to the damage seen on the bit. The dataset, however, covered only some types of dysfunction and some types of bit damage. It was therefore augmented with bit images for which the type of failure was determined through analysis by a subject-matter expert. A classifier was subsequently developed that correctly identified the root causes of failure when the bit photo quality met certain minimum standards. One key observation was that bit images are not always captured appropriately, which reduces the accuracy of the method.
The automated forensics approach to Polycrystalline Diamond Compact (PDC) bit damage root cause analysis described in this paper can be performed using 2D bit photos that can be easily captured on a phone or camera at the rig site. By identifying the potential root causes of PDC damage through image processing, drilling parameters and bit selection can be optimized to prolong future bit life. The algorithm also enables uniformity in bit analysis across a company's operations, as well as the standardization of the process.
The camera is in bulb mode and pre-focused on the intersection of the laser beams. StopShot holds the camera shutter open and the high-speed shutter closed until an insect crosses the beam, which kicks off the capture process. StopShot first turns off the laser sensors so the red dots from the lasers don't show up in your photographs. It then opens the high-speed shutter, which in turn fires the flashes, exposing the image. StopShot then closes the high-speed shutter and refreshes the frame to get everything ready for the next shot. The whole process takes just over half a second. The system has a response time of about 6 ms (6/1000 s); this is the time between the insect crossing the beam and the image being exposed by the flashes. Compare this to an average DSLR response time of about 50 ms (and that is being very generous).
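The capture sequence described above can be sketched as code; the `RigController` interface below is a hypothetical stand-in for the actual StopShot hardware, used only to make the ordering of events explicit:

```python
# Illustrative model of one StopShot capture cycle. The controller methods
# are invented stand-ins; a real rig drives these lines electrically.

class RigController:
    def __init__(self):
        self.log = []  # records the order of hardware events

    def lasers(self, on):
        self.log.append(("lasers", on))

    def hs_shutter(self, open_):
        self.log.append(("hs_shutter", open_))

    def fire_flashes(self):
        self.log.append(("flash", True))

def capture_cycle(rig):
    # The camera is in bulb mode, its shutter already held open.
    rig.lasers(False)      # 1. disable lasers so red dots don't appear
    rig.hs_shutter(True)   # 2. open the high-speed shutter...
    rig.fire_flashes()     # 3. ...which fires the flashes (the exposure)
    rig.hs_shutter(False)  # 4. close the high-speed shutter
    rig.lasers(True)       # 5. re-arm the sensors for the next shot
```

Because the exposure is defined by the flash duration rather than the camera shutter, the 6 ms response budget is spent almost entirely on steps 1-3.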
There are a couple of methods of capturing insects in flight. The first is to bring the insect rig to the insects, which works well for bees and other hovering insects with relatively predictable paths. This method is a lot of fun but takes some practice. To capture insects this way you will want the Insect Rig Strap Kit, which puts the weight of the rig on your pelvis and neck, leaving your hands free to steer the rig toward the insects and to enable and disable the system. The image to the left was captured with this method.
This method also works very well for capturing insects at rest. You can walk up to them and drop the laser sensors right over them; the system captures the image when the laser beam is broken by the subject. I have had a much higher hit rate of in-focus images this way than with the camera's traditional AF. The high-speed techniques employed by the insect rig and the high-speed shutter also make capturing photos this way less susceptible to wind and other subject movement.
Since the photograph will be exposed by the high-speed shutter and, more importantly, by the short duration of the light from the flashes, you will need to make certain your camera is in bulb mode and StopShot is holding the camera shutter open. You can find all of the StopShot settings here. Put the camera in manual focus and adjust the focus to the intersection of the cross-beam laser sensors on the rig. Below you can find an equipment list for the image taken above as well as the camera settings.
When attracting insects at night there is very little ambient light to create ghosting problems. The only ambient light you need to worry about is the light used to attract the insects. For this setup I used a 160 W bulb, but I have also used a 40 W black light with good success. Unless you go overboard with the power of your attraction light, the settings above will work fine. If, however, you plan to chase insects around in bright daylight or sunshine, you will need to stop the camera down considerably more, which also means increasing the flash power to compensate. If you use the settings above in bright sunlight you will see ghosting in your images.
The image of an ichneumon wasp to the left is an uncropped shot from the full-frame D600, to give you an example of the field of view you will get with a 105 mm lens at a focus distance of 0.4 m. The wasp itself is about 10 mm long (not including the antennae). Also notice the slight vignetting in the corners of the image; this is due to the high-speed shutter. You can find more information about vignetting and the high-speed shutter here. The vignetting could be further minimized by shortening the focus distance somewhat.
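The framing quoted above can be sanity-checked with the thin-lens formula. This assumes the 0.4 m is measured roughly from the lens; real focus distances are specified from the sensor plane, so treat the result as approximate:

```python
# Thin-lens estimate of the field of view for a 105 mm lens focused at
# about 0.4 m on a full-frame (36 mm wide) sensor. Assumption: the 0.4 m
# is taken as the object-to-lens distance.

f = 105.0    # focal length, mm
d_o = 400.0  # object distance, mm (assumption, see above)

d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # image distance, from 1/f = 1/d_o + 1/d_i
m = d_i / d_o                      # magnification, about 0.36x

sensor_w = 36.0                    # D600 sensor width, mm
fov_w = sensor_w / m               # field width at the subject, about 100 mm
```

So the frame spans roughly 100 mm at the subject, and a 10 mm wasp occupies about a tenth of the frame width, consistent with the uncropped shot described.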
Pictured above is the setup used for capturing insects drawn to the light. The plant in the image served as the background for the photograph. You can see a 580EX flash lying on the glass table; this flash illuminates the background. The flash is wired into the insect rig using an RCA Y-cable and a PC-to-RCA adapter. Adding more flashes to the rig is easily done.
Stereoscopic image recording is relatively new ground in modern cinematography. The market for stereoscopic motion pictures is now growing, driven by the recent popularization of stereoscopic displays and the development of related standards for video transmission and storage. Although displaying stereoscopic material has become quite easy, recording it is still a complex task.
Rig calibration is usually performed by aligning images obtained from these preview data streams while filming one or several boards with a dedicated pattern of lines and other alignment markers [4]. Complete calibration of the stereoscopic setup includes: camera roll, pitch, and translation compensation; adjustment of lens settings; color-space equalization (mainly for mirror rigs); and finally applying the desired stereo base and convergence settings.
The legacy method of calibration, which the authors have observed in practice, involves combining or switching the video streams with a Matrox MC-100 multiplexer and feeding them to a high-resolution display. The images on the screen are then compared by manually analyzing the relationships between the observed geometries. One of the goals of calibration is to obtain a vertical disparity no larger than a single line. The complete set-up process can easily consume several hours.
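A minimal sketch of how the single-line disparity target could be checked automatically, assuming the two views are available as intensity arrays. This is an illustration of the principle on synthetic data, not the method used by any of the tools mentioned below:

```python
# Estimate vertical disparity between two views by correlating their
# per-row intensity profiles. Pure-Python illustration.

def row_profile(img):
    """img: 2D list of pixel intensities -> mean intensity per row."""
    return [sum(row) / len(row) for row in img]

def vertical_disparity(left, right, search=8):
    """Return the shift (in lines) of `right` relative to `left` that
    best aligns the two row profiles, searched over +/- `search` lines."""
    a, b = row_profile(left), row_profile(right)

    def score(shift):
        pairs = [(a[i], b[i + shift]) for i in range(len(a))
                 if 0 <= i + shift < len(b)]
        # negative mean squared error: higher is better alignment
        return -sum((x - y) ** 2 for x, y in pairs) / len(pairs)

    return max(range(-search, search + 1), key=score)
```

An operator aid would flag any result with absolute value above one line, replacing hours of manual on-screen comparison with an instant readout.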
The calibration time could be significantly reduced by supporting the operator with semi- or fully automated analysis of the video preview streams. One such solution is STAN, the stereoscopic analyzer developed by the Fraunhofer Institute [1]. It is a computer application suite capable of performing a wide range of image analyses and calculations on stereo-pair images. Our goal is to provide similar functionality with a more compact and energy-efficient device that connects to both cameras and provides the analyzed image as well as additional pass-through signals. The following chapters focus on the FPGA firmware of the first prototype of such a solution.
The FPGA processing module cooperates with a GateWorks Ventana GW5400 SBC. This board is based on a quad-core ARM Cortex-A9 processor running at 1 GHz. The processor board provides both HDMI output and input interfaces, which enables it to simultaneously generate 1080p60 video and capture 1080p30 video. The HDMI output is used to generate the overlay, which the FPGA composes into an OSD. The HDMI input enables capturing the image resulting from the analysis and streaming it over Ethernet. More details on the SBC firmware are presented in [7].
The block diagram of the control system is shown in Fig. 5. It is governed by a MicroBlaze processor core coupled with 128 kB of BlockRAM memory. This memory stores the processor's executable code as well as its run-time variables; it is common practice in Xilinx FPGAs to store the application in RAM, which is preloaded with machine code during the FPGA boot process. The processor runs at 100 MHz. It does not take an active part in the image processing; it only schedules the transfers.
Next, the video stream is received by the Xilinx Video DMA (VDMA). This component implements a three-frame image buffer with dynamic GenLock synchronization, offering seamless adaptation between input and output frame rates: frames are repeated or skipped automatically as needed [9]. When one channel of this DMA operates on a frame buffer, the other channel is forbidden from accessing it, guaranteeing that only complete frames are passed through.
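The repeat/skip behaviour can be modelled with a toy triple buffer. This is an illustration of the buffering principle only, not the actual VDMA implementation:

```python
# Toy model of three-frame buffering: the writer and reader never share a
# buffer, and the reader always receives the most recently completed frame,
# repeating it when the reader is faster and skipping frames when the
# writer is faster.

class TripleBuffer:
    def __init__(self):
        self.frames = [None, None, None]
        self.write_idx = 0     # buffer the writer currently owns
        self.last_done = None  # most recently completed buffer

    def write_frame(self, frame):
        self.frames[self.write_idx] = frame
        self.last_done = self.write_idx
        self.write_idx = (self.write_idx + 1) % 3  # rotate to next buffer

    def read_frame(self):
        # Always a complete frame, never the one being written.
        return None if self.last_done is None else self.frames[self.last_done]
```

With three buffers there is always one free for writing, one complete for reading, and one in transition, which is what makes the rate adaptation seamless.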
After the sub-sampling, the data enter the diagnostic block. This custom IP core calculates the image width and height based on the synchronization signals. It can also check whether all image lines have the same width, which may not be the case when the SDI signal is corrupted (e.g., due to a wrong signal-distribution topology or improper termination). The data are also observed by the Color Data Collector, which calculates the average color for nine areas of the image, defined by dividing the frame into three rows and three columns. The row and column boundaries are adjustable at run-time.
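The Color Data Collector's behaviour can be sketched in software, assuming an evenly divided 3x3 grid for simplicity (the real core accumulates running sums per region in hardware, with run-time-adjustable boundaries):

```python
# Software sketch of the nine-region average-color computation, shown for
# a single channel; the hardware does the same per color component.

def grid_averages(img, rows=3, cols=3):
    """img: 2D list of pixel values. Returns a rows x cols grid of
    per-region mean values, with the frame split into equal regions."""
    h, w = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for gr in range(rows):
        for gc in range(cols):
            y0, y1 = gr * h // rows, (gr + 1) * h // rows
            x0, x1 = gc * w // cols, (gc + 1) * w // cols
            region = [img[y][x] for y in range(y0, y1)
                                for x in range(x0, x1)]
            out[gr][gc] = sum(region) / len(region)
    return out
```

Nine averages are a compact summary for the operator: a left/right color mismatch between corresponding regions of the two views points at the color-space equalization step of the calibration.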