This time, we verify the "difference in training and inspection speed with GPUs" stated in the title, comparing training and inspection times across two GPU models and two memory configurations (16GB and 8GB of RAM). ■Conditions ■Verification Configuration ■Results *For more details, please see the related link (blog).
We often receive requests to use high-resolution cameras to improve detection capability. With rule-based image processing, higher-resolution images do tend to improve resolving power and detection capability, but this is not always the case with deep learning. Below is a rough outline of how to think about resolution in deep learning. Consider images like (1) to (3) in Figure 1. (1) Total area 10×10, gray rectangle area 4 (2) Total area 20×20, gray rectangle area 16 (3) Total area 10×10, gray rectangle area 16. The gray rectangle in (2) has four times the area of the one in (1), but as a ratio of the total area, (1) is 4/100 and (2) is 16/400, both just 4%. In terms of how easily deep learning can detect the gray rectangle, (1) and (2) are almost the same.
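To make the ratio argument concrete, here is a minimal sketch (our own illustration, not from the blog post) that computes the size the gray rectangle ends up with once each image is resized to a fixed detector input resolution; the 416×416 input size is an assumed example.

```python
# Minimal sketch: effective object size after resizing to a fixed network input.
# The 416x416 input resolution is an assumed example, not a product specification.

def effective_object_pixels(image_side: int, object_area: int, net_input: int = 416) -> float:
    """Return the object's area in pixels after the image is resized to net_input x net_input."""
    scale = net_input / image_side          # linear scale factor applied by the resize
    return object_area * scale * scale      # area scales with the square of the linear factor

# (1) 10x10 image with a gray rectangle of area 4  -> 4% of the image
# (2) 20x20 image with a gray rectangle of area 16 -> also 4% of the image
print(effective_object_pixels(10, 4))    # ~6922 px after the resize
print(effective_object_pixels(20, 16))   # ~6922 px, i.e. the same effective size
```

Because both images end up at the same input resolution, the object occupies the same number of pixels in each, which is why (1) and (2) are about equally easy to detect.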
EasyInspector2 has three text recognition features: 1) OCR (Optical Character Recognition) 2) Machine Learning OCR 3) AI OCR ■Simple setup ■Can read characters that were previously difficult to recognize ■In closing *For more details, please see the related link (blog).
We sometimes receive inquiries about PC specifications when connecting multiple devices. This time, we would like to introduce the behavior when connecting multiple cameras using the AI function of EasyInspector2 (hereinafter EI2). We hope this serves as a reference when selecting a PC, and also as a guide to how much inspection time increases when multiple devices are connected. We used a multi-controller (hereinafter EIMC) to start and run inspections on six EI2 instances. ■PC ■Hardware Configuration ■Software Configuration ■Results *For more details, please refer to the related link (blog).
Recently, we have been receiving more inquiries from customers using our AI software (such as EasyInspector2 and DeepSky) asking things like "Why does this happen?", "Isn't it supposed to work like this?", and "What is going on inside?". In such cases, the person in charge often seems unsure how to explain. Indeed, an intuitive, easy-to-understand explanation tends to fall back on analogies, while a truly detailed explanation is better served by specialized books. I have also been unable to find suitable explanatory material for customers who want to dig a little deeper into the principles of AI image processing.
▼Obtaining Anisakis → Up to Filming This time, we ran an in-house verification of using AI to find "Anisakis," which several customers had asked us about. We started by trying to obtain live Anisakis, which turned out to be harder than expected... We visited several supermarkets and fish shops in town and phoned around asking for help. Despite it being an extremely busy time of year, I made the rather unusual request to complete strangers: "Could you give me live Anisakis for an experiment to find Anisakis with AI?" (to summarize). I am very grateful to the supermarkets that cooperated and to the staff in the fresh fish sections who helped us. Thank you very much. We assembled the equipment assuming an operation in which AI Anisakis inspection runs over a slowly moving conveyor and, if Anisakis is detected, a buzzer sounds and the conveyor stops. Anisakis fluoresces under specific wavelengths of ultraviolet light, so we used UV LED light sources and filters that pass specific wavelengths. In image processing, it is also important to use optics to capture images in which Anisakis is easier to find.
Until now, Skylogic has mainly handled image inspection projects for industrial products. This is because the nature of industrial products, "mass-producing the same item," matched conventional (rule-based) image processing well. Conversely, even among industrial products, items with unstable colors and shapes have been difficult to handle with traditional methods. However, with the emergence of image processing using AI (deep learning), items with variability can now be brought within the scope of image processing, because AI excels at "detecting only what we want to find while ignoring trivial changes." Connecting this to the title: when we talk about "items with unstable colors and shapes," we are referring to food products. Material defects, foreign object contamination, quantity defects, missing components... even when the subject is food, products made in factories face much the same challenges as industrial products. However, because these were difficult to handle with traditional methods, they tended to be given up on. (We, too, had given up on them.) Why not try AI?
Currently, I am learning a great deal about imaging, and I keep realizing the importance of "taking good pictures." Cameras come with various functions, and if you set them correctly you can capture good images. One of those settings is "white balance." As a newcomer to image processing, I had heard of it but wondered what it actually does. After looking into it, I found it interesting and potentially useful, so I would like to share it. *For more details, please see the related links.*
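As one concrete example of what a white balance adjustment does, here is a minimal sketch of the common "gray world" method; this is an illustrative assumption, and actual camera firmware may use a different algorithm.

```python
# Minimal sketch of "gray world" white balance: scale each color channel so that
# the overall average becomes neutral gray. An illustration, not a camera's actual algorithm.
import numpy as np

def gray_world_white_balance(image: np.ndarray) -> np.ndarray:
    """image: HxWx3 array. Returns a white-balanced copy."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # average of each color channel
    gray = channel_means.mean()                       # target neutral level
    gains = gray / channel_means                      # per-channel correction gains
    balanced = img * gains                            # apply the gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Usage: pass in an HxWx3 image array loaded with any image library.
```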
EasyMonitoring, released in 2018, has been renewed alongside the image processing software "EasyInspector" and was re-released last autumn as EasyMonitoring2. Here we introduce features our customers have appreciated over the past year, as well as newly added capabilities. With the addition of AI features, it has become possible to detect objects even in environments where monitoring used to be difficult because of large changes: - Stable detection accuracy even outdoors - Tolerance of changes in brightness (trained in advance on images from various times of day and lighting patterns) - Objects appearing in different, varied positions each time. Each object can be identified and detected individually, so you can determine "where and what" has been detected: - Pass/fail judgment based on specific areas. Objects are detected with a sense close to the human eye: - Learning of objects with individual differences (animals, insects, agricultural products, foreign substances, etc.) - Detection of areas that cannot be judged by color (RGB) variation or texture alone.
The name "cazoeTell2" has been changed to "wakeTell." The reason for the renaming is that the "2" often led customers to mistakenly think it was an upgraded version of cazoeTell. cazoeTell specializes in counting a single type of item and has high counting performance. wakeTell can classify and count multiple types of items, but its counting performance is lower than cazoeTell's. ■ Comparison Table (please see the related link (blog)) ■ Comparison of Counting Nuts and Washers Both cazoeTell and wakeTell can count the total of "30" items without any issue. However, cazoeTell only reports that there are "30" items, without specifying how many of each there are. wakeTell, on the other hand, recognizes and reports "15" nuts and "15" washers. wakeTell can therefore also be used for purposes such as checking for tools that have not been returned.
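As a rough sketch of the difference between the two results described above (count only vs. classify and count), assuming hypothetical per-object labels rather than the actual cazoeTell/wakeTell output format:

```python
# Count-only vs. classify-and-count, with hypothetical detection labels.
from collections import Counter

detections = ["nut"] * 15 + ["washer"] * 15   # hypothetical per-object labels from detection

total = len(detections)          # count-only result: 30
per_class = Counter(detections)  # classify-and-count result: 15 nuts, 15 washers

print(total)       # 30
print(per_class)   # Counter({'nut': 15, 'washer': 15})
```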
We have received many inquiries from customers and made improvements, resulting in a version that is even further upgraded from the initial release! Higher-precision counts are now possible for a wide variety of objects. Before using cazoeTell for counting, you need to create a learning model tailored to the objects you wish to count. 【Creating a Learning Model】 1) Capture teacher images 2) Perform annotation to create training data 3) Train the AI (create the learning model) ■Points to note when capturing teacher images ■About annotation ■Tips for annotation If you are curious but unsure whether you can do it, or if counting is giving you trouble, please contact Sky Logic first! We will create a learning model tailored to the objects you want to count. *For more details, please refer to the related links (blog).
Recently, we have been receiving inquiries asking whether the agricultural product inspections currently being carried out can be made a little easier. Using vegetables provided by local farmers, we tested whether we could: ■ Classify potatoes by grade ■ Detect damage on tomatoes using SkyLogic. ▼ Potato Classification We photographed potatoes classified into grades A to C from different angles (66 images in total), created the training data, and ran inspections; the potatoes could be classified into grades A, B, and C. (All 30 test images were classified correctly, a 100 percent accuracy rate.) ▼ Detecting Damage on Tomatoes Since damaged tomatoes were hard to come by, we tested damage detection on a single variety. As with the potatoes, we photographed them from different angles (12 images), created training data, and ran inspections, detecting the damage as shown in the images. (All 30 test images were inspected and all damage was detected, a 100 percent accuracy rate.)
This time, I will introduce some information about annotation. Annotation is one of the important settings affecting detection accuracy, so I hope you find this useful. Q. Should labels be grouped together or divided into finer categories? When there are multiple detection targets (such as scratches, dirt, and dents), there are two patterns: registering all annotations under a single label such as "defect," or dividing the labels by type as "scratch," "dirt," and "dent." DeepSky tends to give better detection results with fewer labels, so it is generally better to register them under the same label. However, if you need to know which kind of defect was detected, the labels must be separated by type. In that case, if a scratch is mistakenly registered as dirt, it creates inconsistencies during training and leads to poor learning results, so annotation must be carried out carefully to avoid mistakes and omissions. (Figure 1)
During verification and after implementation, we received many valuable questions and reports from users about DeepSky's settings, so we would like to introduce them. Q. How do you determine overfitting? Q. What does a convergence of 0.1 or lower mean? Does it become harder to converge as the number of labels increases? Q. What is the difference between continuing training from a certain point and resetting and retraining? *For more details, please refer to the related links (blog).
DeepSky can communicate the coordinates of detected objects to a higher-level system via TCP/IP, and by tracking those coordinates in that system, various applications become possible. Here we introduce software that inspects objects flowing on a conveyor and counts them when they cross the center of the camera image. The diagram below (Figure 2) illustrates the configuration of the conveyor, camera, and software. The camera images are processed by DeepSky to detect the type and coordinates of the objects, and this information is passed to the higher-level counting software. The counting software tracks the objects and increments the count when they cross the center of the image. Picture products moving on a belt conveyor: the camera above captures images, checks for defective items, and counts the products. It can also work with a PLC to reduce the conveyor speed or sound a buzzer when a specified count is approached. The counting software counts by tracking the coordinates of the objects. In this way, DeepSky can work together with higher-level software for a variety of uses. Custom software, like this counting software, can also be developed by our company.
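As a rough sketch of how such a higher-level counter might work, here is a minimal example that reads coordinates over TCP/IP and counts center-line crossings; the one-line-per-detection "label,x,y" message format, the address, and the image width are assumptions for illustration, not the actual DeepSky protocol.

```python
# Minimal sketch of a higher-level counter: receive object coordinates over TCP/IP
# and increment a count when an object crosses the image center line.
# Message format, address, and image width are illustrative assumptions.
import socket

HOST, PORT = "127.0.0.1", 5000   # assumed address of the coordinate stream
IMAGE_WIDTH = 1280               # assumed camera image width in pixels
CENTER_X = IMAGE_WIDTH // 2

count = 0
last_x = {}   # last known x position per object label (simplified tracking for the sketch)

with socket.create_connection((HOST, PORT)) as conn, conn.makefile("r") as stream:
    for line in stream:
        label, x_str, _y = line.strip().split(",")
        x = float(x_str)
        prev = last_x.get(label)
        # Count when the object moves from the left half to the right half of the image.
        if prev is not None and prev < CENTER_X <= x:
            count += 1
            print(f"{label} crossed the center, total = {count}")
        last_x[label] = x
```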
When talking with customers, I sometimes sense that they believe AI (deep learning) decides on its own what is OK and what is NG. It is a subtle nuance, and not something that needs correcting in the course of an actual conversation, but today I would like to clear up that confusion in this article. The actual situation is that it is humans who decide, in the software, what counts as OK and what counts as NG. That naturally leads to the question, "So what does the AI do, then?" Put simply, what our AI does is "find what it has been taught," and fundamentally that is all. This function is generally called "object detection." In object detection, the software literally detects "objects" within an image.
In recent years, as machinery and equipment have become smaller and more precise, smaller components are used more often, and high-resolution cameras are increasingly used for image inspection of such small parts. However, using high-resolution cameras also requires a high-spec computer to connect them. In this report, we present the verification and results of connecting two high-resolution (14-megapixel) cameras to a single computer and starting them simultaneously. As a result, we were able to display the images from both 14-megapixel cameras on one computer screen (Figure 1). The image below was captured by one of the two connected cameras. Parts smaller than 2mm (approximately 0.8×1.6mm) can be detected within a field of view of about 100mm square. (Figure 2 is an enlargement of the area outlined in red in the overall image on the left.)
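As a rough sense of the numbers (our own assumptions, not figures from the report), a 14-megapixel sensor of roughly 4384×3288 pixels imaging a 100 mm field of view gives on the order of 40 pixels per millimetre, so a 0.8×1.6 mm part spans several tens of pixels:

```python
# Back-of-the-envelope resolution calculation; the 4384x3288 resolution assumed
# for a "14 million pixel" camera is an illustrative value.
sensor_px_wide = 4384
field_of_view_mm = 100

px_per_mm = sensor_px_wide / field_of_view_mm   # ~43.8 px per mm
part_w_px = 0.8 * px_per_mm                     # ~35 px across the narrow side of the part
part_h_px = 1.6 * px_per_mm                     # ~70 px across the long side
print(px_per_mm, part_w_px, part_h_px)
```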
This time, I would like to organize the types of AI. A new introduction page for DeepSky has also been created, so please take a look. The "AI" currently in the spotlight refers to "deep learning," the kind of AI that can distinguish pedestrians from traffic signs and even defeat human professionals at chess and Go. In the world of image processing, there is also machine-learning-based image processing and what is called procedural image processing. So what are the differences between them? The diagram below should make the meaning of each easier to understand. AI (Artificial Intelligence), as the name suggests, means artificial intelligence, and since around 1950 it has been used as a broad concept meaning "something that substitutes for human intelligence." In image processing, "procedural" (rule-based) image processing, which processes camera images according to a fixed procedure to determine OK/NG, also falls under this broad category of AI. Our EasyInspector uses this type across a wide range of inspections, including color judgment and dimension and angle inspection.
We have received several inquiries from food production and processing sites. We can now handle cases that were difficult for EasyInspector, such as detecting hair and foreign objects mixed into bento boxes, which further demonstrates the breadth of our AI (deep learning) functions. DeepSky lets us teach what we want to detect with a broader scope, making it possible to identify foreign objects that do not have a fixed shape. If there are items currently checked by human eyes and you wonder, "Can this be inspected?", please feel free to contact us.
▼Feature: Auto Annotation This is a function that performs annotation automatically. It was previously a beta feature, but from Ver. 2.2.0.0 it has become an official feature. As many of you know, annotation is very important in object recognition. However, when there are many target objects in an image, the annotation workload grows, and as the workload grows, mistakes such as missed or incorrect annotations inevitably increase. If an annotation is missed, DeepSky adjusts its parameters toward "this is something that should not be detected (even though it looks like the object)," which can significantly lower the recognition rate and lead to various problems. (See Figure 1: Missed Annotation) This is where the auto annotation feature comes in. With a single click, it automatically suggests annotations along the lines of "based on the previous training data, you would probably want to annotate here." (See Figures 2, 3, and 4)
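As a rough illustration of the general idea (our own sketch, not DeepSky's implementation), auto annotation can be thought of as running an existing trained model on the image and offering its confident detections back to the user as suggested boxes; the model interface and threshold below are hypothetical.

```python
# Minimal sketch of the auto-annotation idea: reuse an existing model's predictions
# as suggested bounding boxes. `model.predict` and the detection format are assumptions.

CONFIDENCE_THRESHOLD = 0.8   # only confident predictions are suggested

def suggest_annotations(model, image):
    """Return model detections confident enough to propose as annotations."""
    suggestions = []
    for det in model.predict(image):          # assumed: yields dicts with label, box, score
        if det["score"] >= CONFIDENCE_THRESHOLD:
            suggestions.append({"label": det["label"], "box": det["box"]})
    return suggestions                        # the user then accepts, edits, or rejects these
```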
Function: OK/NG Judgment by Area This function judges not only "how many items were detected in the image" but also "where they were detected." Depending on the application, a correct judgment may not be possible from the total count on screen alone. For example, consider a case where two capacitors are mounted with alternating orientations, as shown below. Here, if the orientations of the capacitors are reversed, the board should be judged defective; yet in terms of total count there is still one capacitor with the polarity mark on top and one with it on the bottom, so both boards would be judged acceptable. (Figure 1) In such cases, the judgment is made by dividing the image into areas. As shown by the dotted boxes below, predetermined areas are set up and the count judgment is performed per area. (Figure 2) Within each area, settings such as "OK if one upward-facing capacitor is detected" and "NG if even one downward-facing capacitor is detected" are made. (Figure 3) This allows a defective item with the same total count but swapped positions to be correctly judged NG. (Figure 4)
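As a minimal sketch of area-based judgment using the capacitor example, assuming a hypothetical detection format (label plus center point) and an illustrative area layout:

```python
# Minimal sketch of area-based OK/NG judgment with the alternating-capacitor example.
# The detection format and the area coordinates are illustrative assumptions.

areas = {
    "area_1": (0, 0, 200, 400),     # x_min, y_min, x_max, y_max of each judgment area
    "area_2": (200, 0, 400, 400),
}
expected = {"area_1": "cap_up", "area_2": "cap_down"}   # required orientation in each area

def judge(detections):
    """detections: list of (label, center_x, center_y) tuples from the detector."""
    results = {}
    for name, (x0, y0, x1, y1) in areas.items():
        labels = [lab for lab, x, y in detections if x0 <= x < x1 and y0 <= y < y1]
        ok = labels == [expected[name]]      # exactly one object, with the required orientation
        results[name] = "OK" if ok else "NG"
    return results

# A board with the orientations swapped has the same total counts (one up, one down)
# but fails the per-area check:
print(judge([("cap_down", 100, 200), ("cap_up", 300, 200)]))  # {'area_1': 'NG', 'area_2': 'NG'}
```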
DeepSky is image inspection software that uses so-called AI (deep learning). By training it on the parts you want to detect, the software adjusts its own setting parameters and learns to recognize them. Here are three features that particularly differentiate it from traditional methods. *We will use the inspection of washer scratches as an example.* ▼ No need to position the inspection target (Figure 1: No scratches, OK) (Figure 2: Scratches, NG) Because positioning is unnecessary, inspections can be performed with the same settings even if the number of washers on the screen varies. This is effective for inspection targets that are difficult to fix in place. ▼ Can detect even with changes in brightness (Figure 3) Scratches can be detected even when the lighting is this dim. This is effective when it is difficult to suppress reflections from metal parts, or for products that come in multiple colors.
We received a question from a customer using the image inspection software EasyInspector: "Can't inverter fluorescent lights be dimmed?" We tell customers using EasyInspector that "if the inspection works fine under ordinary indoor or inverter fluorescent lighting, there is no need to use expensive lighting." Many LED lights have separate lighting and power units and can be dimmed with a knob on the power unit, so this question was quite natural. In fact, there are several ways to adjust brightness: 1) Dimming the light 2) The lens aperture (for industrial cameras) 3) The camera's exposure time and gain settings. In other words, in principle, if any one of these three can be adjusted, brightness can be controlled. So which method is best? (Figure 1) This is a photo of part of a plastic bottle cap, with the focus set on the bottom of the cap. Both images have the same brightness, but you can see that in the left image the focus extends to the edge of the cap, unlike the right. This is what is meant by "having a deep depth of field."
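As a minimal sketch of option 3 (adjusting exposure time and gain in software), using OpenCV's generic camera interface; whether these properties take effect, and their units, depend on the specific camera and driver, so treat this only as an illustration:

```python
# Minimal sketch of adjusting brightness via exposure time and gain with OpenCV.
# Whether these properties are honored, and their scale, is camera/driver dependent.
import cv2

cap = cv2.VideoCapture(0)              # first connected camera

cap.set(cv2.CAP_PROP_EXPOSURE, -6)     # exposure value; the unit/scale is driver-dependent
cap.set(cv2.CAP_PROP_GAIN, 0)          # keep gain low to avoid amplifying noise

ok, frame = cap.read()
if ok:
    cv2.imwrite("capture.png", frame)  # inspect the resulting brightness
cap.release()
```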
One of the common concerns when selecting a camera is the interface. In recent years, most industrial camera manufacturers have released both GigE and USB3.0 models, because each of the two interfaces has its own advantages and disadvantages. In this article, I would like to discuss how to choose the appropriate interface for your application. ■ Price Comparison The differences between the two are clearly defined in their specifications, so I will extract some of that information. Looking at the table below should clarify which camera to choose for your current application. (Note: Manufacturers offer cables up to 5m or 8m, but operation may become unstable in some cases.) ■ Power Supply for GigE Cameras When connecting via GigE, the camera cannot be powered from the PC simply by connecting the PC and camera with a LAN cable. You therefore need to either power the camera directly or place a PoE injector or PoE hub between the PC and the camera to supply power. (Note: a PoE hub capable of powering 4 cameras (BS-GU2005P))
Camera image sensors can be broadly divided into two types: "global shutter" and "rolling shutter." The difference lies in whether all of the sensor's pixels are exposed simultaneously (global shutter) or line by line from the top down (rolling shutter). How does this difference affect the image? A typical example is the distortion shown below. When capturing a moving object, a global shutter photographs a spherical object as a sphere, whereas with a rolling shutter, because exposure proceeds line by line from the top, the object may have moved during the readout, resulting in distortion. The difference between the two therefore becomes apparent when photographing moving subjects. One might think, "Why not just use global shutters?" However, rolling shutters are more advantageous (cheaper) in terms of manufacturing cost. (Figure 1: Left: Global Shutter, Right: Rolling Shutter)
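As a rough way to quantify the distortion (our own illustrative numbers, not from the article), the skew is approximately the object's speed multiplied by the time the sensor takes to read out from the top row to the bottom row:

```python
# Back-of-the-envelope rolling-shutter skew estimate; the readout time, object speed,
# and imaging resolution below are assumed example values, not measured figures.
readout_time_s = 1 / 60        # time to read out from the top row to the bottom row
object_speed_mm_s = 500        # object moving 500 mm/s across the field of view
px_per_mm = 10                 # imaging resolution

skew_mm = object_speed_mm_s * readout_time_s   # ~8.3 mm of displacement during readout
skew_px = skew_mm * px_per_mm                  # ~83 px of skew between top and bottom rows
print(skew_mm, skew_px)
```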
As explained in "High Pixels or Low Pixels (2)," industrial cameras come with various sensor sizes. Generally they range from 1/3 to 1/2 inch, but there are also larger sizes such as 1 inch. With the same number of pixels, a larger sensor means a larger pixel size, which brings: - A larger light-receiving area per pixel, allowing shorter exposure times - Less image noise at the same exposure time - Sharper images with the same lens focusing performance (it may help to think of a larger pixel as giving the lens a larger target to focus onto). These are the advantages (however, larger sensors are more expensive to produce, so the price rises accordingly). One point to note: if you change the sensor size from 1/2 inch to 1 inch to improve image quality, the angle of view (field of view) will also change, even with the same lens.
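As a rough illustration of the angle-of-view point, using the thin-lens approximation and nominal sensor widths (6.4 mm for 1/2 inch, 12.8 mm for 1 inch) as assumptions:

```python
# How angle of view changes with sensor size for the same lens (thin-lens approximation).
# Sensor widths are nominal assumed values for 1/2" and 1" formats.
import math

def angle_of_view_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(angle_of_view_deg(6.4, 12))    # ~30 degrees with a 1/2" sensor and a 12 mm lens
print(angle_of_view_deg(12.8, 12))   # ~56 degrees with a 1" sensor and the same lens
```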
Free membership registrationIn "(1) High Pixels or Low Pixels," we discussed the advantages of high-pixel cameras in capturing details clearly. Here, I would like to write about the advantages of low-pixel cameras. The advantage of low-pixel cameras can be summed up in one phrase: "processing is faster." There are two reasons for the faster processing. 1. There are fewer pixels to handle as image data. 2. The exposure time can be shorter, resulting in less time needed for shooting. I believe there is no room for doubt regarding point 1. So, why can the exposure time be shorter in point 2? Let me explain. The size of the camera sensor (the square light-receiving part shown in the initial photo) varies, but in industrial cameras, sizes around 5mm square to 7mm square are common (1/3 inch to 1/2 inch). However, as mentioned in "High Pixels or Low Pixels (1)," the variation in pixel counts ranges from about 300,000 pixels to around 14 million pixels. At this time, what differences exist in the size of the pixels?
Free membership registration"High pixels mean high performance" — Indeed, as the pixel count increases, it becomes possible to capture details more clearly. However, it does not simply mean that larger is better, as high-pixel cameras also have drawbacks. For example, using a camera with unnecessarily high pixel counts can lead to longer image processing times. Therefore, it is necessary to consider whether "it is really impossible to make a judgment without using a high-pixel camera?" when selecting the pixel count of a camera. The example below shows the results of an experiment on how much difference in clarity there is when enlarging a part of a PC motherboard from 300,000 pixels to 14 million pixels. (Refer to the figure) In this way, high pixel performance is demonstrated when trying to see fine details within a wide field of view. However, as mentioned earlier, not only does image processing take longer, but as will be introduced in "High pixels or low pixels (2)," it is also necessary to increase the exposure time. The consideration of whether "it is really impossible to make a judgment without using a high-pixel camera?" is one of the important factors in selecting a camera.
A representative example of an inexpensive camera is the webcam, which connects to a computer via USB. Prices start from around 1,000 yen, which is very affordable, but it is worth understanding why industrial cameras are generally used for image processing, to avoid potential problems. Of course, the cheaper option improves cost-effectiveness; in my experience the split is roughly 3 customers using webcams to every 7 using industrial cameras. Put the other way around, about 30% of customers can operate effectively with a webcam.
The aperture acts like the iris of the eye, adjusting the amount of light that reaches the camera's image sensor, and it also affects the depth of field and the required exposure time. This time, I would like to look at depth of field and exposure time. ■Depth of Field ■Exposure Time ■How to Increase Depth of Field While Reducing Exposure Time
Lenses that allow aperture and focus adjustment have a value called "focal length," which relates to the angle of view (shooting range). Lens specifications are commonly expressed as the maximum aperture and focal length together, such as f1.4/12mm, which shows that focal length is an important specification. Here we introduce how to choose a focal length. *For more details, please see the related links.*
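As a rough guide to the relationship involved (a thin-lens approximation with an assumed 1/2-inch, 6.4 mm-wide sensor; not a statement about any particular lens):

```python
# Rough focal-length selection: for working distances much larger than the focal length,
# field of view ~= sensor width x working distance / focal length.
# The 6.4 mm sensor width is an assumed example value.
def required_focal_length_mm(sensor_width_mm, working_distance_mm, field_of_view_mm):
    return sensor_width_mm * working_distance_mm / field_of_view_mm

# Example: camera 300 mm above the part, 100 mm field of view required
print(required_focal_length_mm(6.4, 300, 100))   # ~19 mm; a nearby standard focal length
                                                 # (e.g. 16 mm or 25 mm) would then be considered
```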
In image inspection, how light enters the camera's image sensor greatly affects whether the defects you want to detect can actually be found. 1: Influence of Reflected Light One factor that changes greatly with the angle of illumination is reflected light. For example, as shown below, even when photographing the same object, its appearance can change completely depending on the lighting angle. (Figure 1: Left: illuminated from directly above, Right: illuminated from an angle) The sample above has a moisture-proof coating on the surface of the board, giving it an overall glossy appearance. When photographing such an object, directing light straight at it can cause the light source to reflect directly into the camera, producing an unusable image. In such cases, the lighting is angled. (Figure 2: Left: illuminated from directly above, Right: illuminated from an angle) The blue arrows in the diagram indicate the reflection of the light source. When the light comes from directly above, the reflected light enters the camera directly, but when it comes from an angle, the reflection escapes to the opposite side and direct reflected light can be avoided.
I would like to write about the various types and characteristics of lighting, as well as their main uses. Please choose the optimal lighting for your needs. ■ Bar lighting ■ Ring lighting ■ Low-angle ring lighting ■ Backlight (transmitted light) ■ Backlight + polarizing filter ■ Coaxial lighting ■ Dome lighting *For more details, please refer to the related links.
In the manufacturing industry, quality control and quality assurance are essential for maintaining corporate trust. Among the various quality control tasks, "visual inspection" occupies a significant portion. By conducting visual inspections at each stage of the process, the yield of downstream processes can be improved and the outflow of defects prevented in advance. However, this can also increase manufacturing costs, and in some cases visual inspection becomes a bottleneck that reduces production capacity. Here, we discuss the points to consider when automating visual inspection, the differences between traditional image processing and the latest AI image processing, and trends in visual inspection overseas. ■ What is visual inspection? ■ Main inspection items of visual inspection ■ Automated visual inspection as an alternative to manual inspection ■ Advantages and disadvantages of manual inspection ■ Advantages and disadvantages of automated visual inspection ■ Steps for implementing automated visual inspection ■ Will AI (deep learning) visual inspection replace traditional rule-based image processing? ■ Situations where procedural image processing is used in visual inspection ■ Situations where AI is used in visual inspection ■ Points to consider when automating visual inspection ■ Trends in automated visual inspection overseas ■ Summary ■ Visual inspection systems that can be implemented at low cost *Please see the related links.
Through a free simple verification, we reported on the presence or absence of tears in a food manufacturer's shrink-wrapped products. We would like to continue verifying issues such as deformation, stacking collapse, and flap adhesion, and to propose various defect detection methods. In the simple verification, we correctly identified approximately 80% of tears that are easy for the human eye to notice, and about 50% of less noticeable tears, such as those on white products. This verification was conducted with a limited number of samples, and I believe the accuracy was affected by the small amount of training data; detection accuracy will improve as more data is used for training. 【Software Used】 DeepSky
We received an inquiry from an electronic equipment manufacturer about reading the indicator lights and meters of various control devices, and decided to set up and report on six types of reading functions. The two images on the left and right use the "meter reading function." For the other three images, pass/fail is determined by whether the lit color is detected or not. However, there appears to be variability in how the lighting hits the upper and lower parts of the meter, so illuminating the darker lower part with bar lighting or similar should reduce misdetections. As for the other image we received, reading was possible with the image as provided. Some minor correction for misalignment is possible, but the camera must be fixed in place.
A customer sent us photo samples of defects that occasionally occur in their silk-screen printing process, asking whether image inspection could be introduced for issues such as missing characters. We conducted a simple verification for "missing" and "faded" print. Using EasyInspector's "comparison with master image" function, the inspection was possible. We ran the verification on the sample with the smallest visible defect among those received. As a result, under limited conditions (*), the missing-character defect could be detected. Outside those conditions, however, due to the nature and shape of the product, many areas that were not defective were falsely detected. *The limited conditions refer to inspecting with the focus narrowed to one specific area of the printed part, so in actual operation under the same conditions the inspection range would likely be quite limited.
Counting parts is a common inquiry; this time it came from a plastic parts manufacturer. We feel that demand for inspection automation due to labor shortages is increasing year by year. We ran the verification using DeepSky, software that uses AI (deep learning): by training it on the areas you want to detect, it adjusts its own settings and learns to recognize them. Verifying the samples we received, counting was possible. However, for samples not placed in dedicated trays, where parts were touching or overlapping, the inspection items could be verified only after making adjustments. Regarding inspection cycle time, the EasyInspector proposal we submitted previously has a shorter cycle; however, since counting items not placed in trays is difficult for it to judge, we verified with DeepSky this time. DeepSky's inspection cycle time depends on PC specifications, but it is approximately 0.3 seconds per image (per inspection). With DeepSky, higher-level software that supports counting on conveyors can also be used, accommodating various types of counting.
This was a request from a metal processing manufacturer to count 7-meter-long rods without positioning them. This time the rods were unpainted, but the accuracy was comparable to the previously tested white-painted rods. However, items with significantly different sizes or angles may be counted inaccurately due to a lack of training data. The left image shows the annotation, while the right image displays the detected workpieces for counting as points. The strength of AI (deep learning) image inspection software over traditional rule-based software is its ability to inspect even with variations in brightness and imaging environment, such as outdoors, so it can be useful across various industries and processes. With DeepSky, higher-level software that supports counting on conveyors can also be used, allowing counting through various operational methods. 【Software Used】 DeepSky
The confectionery manufacturer in this inquiry packs its products in cardboard boxes, just as in other industries, and we conducted a simple verification of whether the manufacturing date is printed on the box. The EasyInspector software in question (traditional rule-based image inspection software) is used mainly by customers in the automotive parts and electronic circuit board sectors, who make up half of our clientele, and it is also used across a wide range of industries for tasks such as assembly verification and inspection of minor product dirt. We planned to use the EasyInspector feature called "Color Presence Inspection." This function lets you specify an ink color and judge pass/fail based on whether that color is detected within the specified range. In the left image, the specified color is detected as red and a passing blue frame is displayed. In the right image, the specified color could not be detected, resulting in a failing red frame. Simultaneous inspection at two locations is also possible by using two cameras.
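As a generic illustration of the idea behind a color-presence check (our own sketch, not EasyInspector's implementation), one can count the pixels that fall inside a specified color range and pass the image when the count exceeds a threshold; the HSV range, threshold, and file name below are assumed values:

```python
# Generic color-presence check: count pixels inside a specified HSV range and
# judge pass/fail against a threshold. Range, threshold, and file name are
# illustrative assumptions, not EasyInspector settings.
import cv2
import numpy as np

def color_present(image_bgr, lower_hsv, upper_hsv, min_pixels):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return int(cv2.countNonZero(mask)) >= min_pixels

image = cv2.imread("box.png")   # hypothetical captured image of the box
if image is not None:
    ok = color_present(image, (0, 80, 80), (10, 255, 255), min_pixels=200)  # dark red ink
    print("OK" if ok else "NG")
```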
There was an inquiry about wanting to remotely check analog instruments in a factory. ■ Indoor Meters For indoor analog meters, reading should be possible. However, if inspections are to be carried out at night, brightness must be kept consistent between day and night. ■ Outdoor Meters Outdoor meters will likely require lighting. Even with lighting, there may still be variations in brightness and shadows between day and night, so something to enclose the meter, or a roof-like structure over it, seems necessary. Another reason reading outdoors is difficult is the power supply. Using EasyInspector's "Meter Reading" function, it was possible to read the measurement value at one location. Regarding the difference in light intensity, the image on the right can be read, but in the left image a different part is misread as the needle, resulting in an incorrect judgment. Creating a stable imaging environment is crucial for reading.
Our inspection system is also in operation in the architecture and construction industry. We verified the reading of a 7-segment display meter. Using EasyInspector's "OCR" function, we were able to read the 7-segment characters. The image itself was slightly blurred and had some color inconsistencies. When we extracted and processed the portion of the image that seemed easiest to read, the result was not especially clean, but it could still be read. Improving the imaging conditions should allow clearer readings. 【Software Used】 EasyInspector710 (formerly EasyInspector) In the current lineup, [digital 7-segment display reading] is covered by the "EasyInspector2" CP (Control Panel) package and can also be inspected with "EasyMonitoring2".
The customer had consulted another company with a budget of 500,000 yen, but was told that a precise inspection could not be done in just a few seconds. We then received a request from this distribution and logistics manufacturer for a process in which, without requiring precision, a preliminary judgment of whether an item "might be NG" is made in a few seconds, with flagged items then checked by a person. The subject is the inspection of shipping containers with cushioning materials. Using EasyInspector's "comparison with master image" function, we were able to judge abnormalities at 12 locations in the cushioning cardboard in 0.25 seconds. In the left image, a misinstalled Santec foam component was successfully detected and judged. In the right image, the defective part was judged incorrectly. For accurate judgment, it is important to improve the imaging environment: add lighting to brighten the image, make shadowed areas clearly dark, and move the camera further away and as vertical as possible to minimize parts hidden by the partition board.
We have received inquiries about our inspection software from shipping companies as well. After packing products in the warehouse, this customer is considering adding an image-recording step to the workflow to leave evidence that no products are missing. Our inspection software is used across a variety of industries. Using EasyInspector's "Presence of Specified Color Inspection" feature, we were able to judge pass/fail for the item counts in six divided areas in 0.39 seconds. When the condition is normal, the specified color is detected above the reference value and the result is a "pass." The left image shows the settings screen, while the right image shows the inspection results table. As for recording, the results can be saved as images or recorded in CSV format.
If side dishes lined up in a supermarket overflow their trays, there are concerns about leakage and poor presentation, which may lead to unsold items. This time, we received a request from a food manufacturer for a simple verification of food overflowing from trays. Using EasyInspector's "Presence of Designated Color" feature, we were able to inspect one area (the entire screen) in 0.39 seconds. The left image shows the designated color settings, where we set the color of the food that may overflow. The right image shows the mask settings (specifying pixels to exclude from detection); the designated color is no longer detected in the masked area. 【Software and Equipment Used】 Software Used: EasyInspector Field of View: Approximately 10 x 8 mm Minimum Size of Inspection Target: 5 mm Number of Inspection Points: 1 Camera Resolution: 1.3 Megapixels Lens Focal Length: 6 mm Distance from Lens to Product: Approximately 165 mm Lighting: Ring Lighting Distance from Lighting to Inspection Item: Approximately 120 mm In the current lineup, the "EasyInspector2" color package can perform the "Presence of Designated Color" inspection.