Meetings

Reports and slides of user group meetings will be available here. 

Supervisory group meeting 1: project kick-off (2/12/2019)

December 2, 2019, campus De Nayer, Sint-Katelijne-Waver

 

Supervisory group meeting 2 (29/6/2020)

June 29, 2020, online

Participants: Bart Moyaers, Kim Rutten (Arkite), Edouard Charvet (Scioteq), Frederic Pereda, Kun Liu (Aperam), Geert Caenen (Xenomatix), Gert Morlion (Vlaamse Waterweg), Jan Kempeneers, Samuel Milton (Sirris), Johan De Coster (SICK), Maarten De Munck, Ricardo Elizondo (Kapernikov), Peter Geirnaert (Urban Waterway Logistics), Piet Opstaele (Port of Antwerp), Yanming Wu, Dimiter Holm, Fayjie Abdur Razzaq, Eric Demeester, Peter Slaets, Patrick Vandewalle (KU Leuven)

In this (online) meeting, we presented overviews of available point cloud technology: sensors to capture point clouds, (open source) software and libraries to process them, and available data sets. We also presented an overview of case studies proposed by the participating companies, and discussed some of these. 

Agenda and presentation slides:

Questions:

  • Q: Time-of-flight cameras have many parameters, making it difficult to choose an appropriate setup. 
    A: In a next phase we plan to evaluate several cameras, both in the lab and in a practical setting. 
  • Q: Will there also be price indications for the various cameras?
    A: Yes, we are working on that, but that part of our list is not complete yet. 
  • On checking with the audience, open source tools are preferred by the participating companies; there is no need to also investigate commercial packages.

Use case discussion: 

  • There is an interest in merging point clouds, for example captured along a railroad track. In such a case the vegetation can vary between captures, but multiple measurements can be merged based on fixed items such as houses or poles along the track.
  • As several participating companies are active in waterway navigation, that would be an application of broad interest. It can even be extended to navigation on roads or in orchards, which share many common problems.
    Similarly, localization and orientation of a plane during landing (approaching the runway) would be interesting. 
  • Another area of interest is quality control: determining whether a product satisfies the quality requirements. 
  • Position and pose estimation in indoor logistics is also very relevant, enabling automated cranes and object localization. 
  • The companies agreed that there are many interesting aspects to explore, and suggested also leveraging the expertise of the participating research groups. 
  • We agreed that further details and discussions can also be held on the project Slack channel. 
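Merging point clouds captured at different times, as discussed above, amounts to estimating a rigid transform between fixed landmarks (poles, house corners) seen in both captures. A minimal sketch using the Kabsch algorithm with numpy; the function name and the assumption of known, noise-free landmark correspondences are illustrative, not project code:

```python
import numpy as np

def kabsch(source, target):
    """Rigid transform (R, t) aligning source landmarks to target landmarks.

    source, target: (N, 3) arrays of corresponding fixed points
    (e.g. poles along the track seen in both captures).
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Merge a new capture into the reference frame of an earlier one:
# cloud_merged = (R @ cloud_new.T).T + t
```

In practice the correspondences would come from a registration method such as ICP on the stable landmark points, with the varying vegetation filtered out first.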

 

Supervisory group meeting 3 (25/11/2020)

November 25, 2020, online

Participants: Bjorn Van de Vondel (DSP Valley), Edouard Charvet (Scioteq), Kun Liu (Aperam), Frédéric Pereda (Aperam), Geert Caenen (Xenomatix), Jan Kempeneers (Sirris), Samuel Milton (Sirris), Ricardo Elizondo (Kapernikov), Maarten De Munck (Kapernikov), Kim Rutten (Arkite), Piet Opstaele (Port of Antwerp), Sven De Craemer (Colruyt), Yves Hacha (TRESCO), Yanming Wu, Dimiter Holm, Fayjie Abdur Razzaq, Eric Demeester, Peter Slaets, Patrick Vandewalle (KU Leuven)

Excused: Gert Morlion (Vlaamse Waterweg), Piet Creemers (Vlaamse Waterweg), Mark Vanlook (EUKA), Tom Heiremans (VLAIO)

In this (online) meeting, we presented the current project status, as well as our survey of related work and point cloud case studies on object detection, pose estimation and tracking from 3D point clouds. These presentations were followed by feedback and discussion. 

Agenda and presentation slides:

Questions:

  • Q (Yanming's presentation): In the pose estimation experiments, the pose was reset every 15 frames. If it were not, would the system diverge or would it still come back to a good result?
    A: Indeed, with the current method a regular reset seems to be required for industrial applications. 
  • Q (Fayjie's presentation): For object detection, will methods vary a lot depending on the object complexity (coils vs more general convex/concave objects)?
    A: This will depend mostly on the data set and object size. 
  • Q (Dimiter's presentation): Were game engines also evaluated for use as a simulator?
    A: Yes, that is also an option (included at the end of the presentation), but it usually requires more work. 
  • Overall, the companies are broadly interested in applications of point clouds for detection, pose estimation and tracking, definitely also including practical experiments.
  • Comment about use cases: self-localization is also of interest to several companies (Arkite, ScioTeq). 
  • General comment: congratulations on the nice and thorough job done so far!

Supervisory group meeting 4 (7/6/2021)

June 7, 2021, online

Participants: Dieter Therssen (DSP Valley), Frédéric Pereda (Aperam), Gert Morlion (Vlaamse Waterweg), Edouard Charvet, Heikki Deschacht (ScioTeq), Jan Kempeneers (Sirris), Koenraad Van De Veere (Phaer), Petra Van Mulders (Flanders Make core lab EUKA), Stijn Debruyckere (Arkite), Sven De Craemer (Colruyt), Yves Hacha (Tresco), Geert Caenen (Xenomatix), Tom Heiremans (Vlaio), Yanming Wu, Dimiter Holm, Fayjie Abdur Razzaq, Eric Demeester, Peter Slaets, Patrick Vandewalle (KU Leuven)

In this (online) meeting, we presented the current project status, and in particular the ongoing point cloud case studies on object detection, pose estimation and tracking from 3D point clouds. These presentations were followed by some feedback and discussion about potential further use cases. 

Agenda and presentation slides:

  • Introduction and project status (Patrick Vandewalle)
    Two workshops will be organised:
    • September 7, 2021: Hands-on with point clouds
    • February 2022: Final workshop with showcase of project results
  • 6D object pose tracking experiments and use cases (Yanming Wu)
    Q&A:
    • Q: The tracking model was trained with 200k images. Does this also take the z-axis into account (distance from the camera)? Would it work as well, or would it need additional training data?
      A: This depends on the sensor (a Kinect was used in the current experiments, which is not very good at close range and only works well from about 80 cm). We could also take the relevant distance range into account when generating data.
    • Q: What if the object is moving fast, e.g. a moving car or captures from a drone?
      A: The algorithm does not work so well for faster objects (see the cylinder tracking demo). It could be improved by allowing more variation in training.
    • Q: Would a faster frame rate also help for fast motion?
      A: Yes, the problem lies mostly in the difference between consecutive frames.
    • Q: Does the neural network also give an indication of the level of confidence in the returned pose estimate?
      A: No, currently no confidence estimate is provided.
    • Q: A distance closer than 80 cm would be a problem, but is there algorithmically also a maximum distance?
      A: From an algorithmic point of view there is no limitation. However, performance will probably degrade as the object becomes smaller at larger distances.
  • Object detection from 3D point clouds (Fayjie Abdur Razzaq)
  • Use cases IMP: multimodal sensor module and point cloud-based positioning (Dimiter Holm)
    • Q: Was the point cloud dataset used for registration with the satellite image an existing dataset, or how was it captured?
      A: This dataset was collected in an experiment last month using the Cogge research vessel, and was manually aligned with the satellite image.
    • Q: Are you building a map from a moving sensor setup?
      A: The vessel was moving in a straight line, but compared to the map this was only a very small-scale movement; map building would extend to other and larger areas.

Potential further use cases

  • Semantic point cloud segmentation (ongoing)
  • Small object detection on road using solid-state lidar data
  • Obstacle detection on runways
  • 6D pose estimation and tracking (ongoing)
  • Conveyor belt 3D measurements
  • Scrap volume measurement
  • Photogrammetry for 3D object capture
  • Ladle tracking
  • Fully automated coil loading on a truck
  • Sensor box measurements on waterways (ongoing)
  • Point cloud based self-localization on a 2D map
  • Benchmark point cloud measurements on waterways
  • Sensing under varying weather conditions
  • Map-building and public dataset

Discussion

  • Sirris is mostly interested in 6D pose estimation (and tracking). A basic use case is estimating the 6D pose of a part in front of a robot setup or on a CNC machine before starting to work on the part (milling etc.); a 3D model of the part could be exactly available (CAD), only approximately available (a nearly correct CAD model), or the part may be milled from weirdly shaped raw material.
  • 3D line scanner using conveyor belt setup
    • Status: the conveyor belt has been set up with a regular line scan camera; the belt (currently) runs at a fixed speed (240 mm/s), is 600 mm wide and 1120 mm long, and both white and green belts are available.
    • Phaer provides a (single) Chromasens 3D PIXA camera setup on loan for 3D line scanning, with engineering support. The 3D PIXA requires a linear movement, so it is mostly applied in conveyor belt applications, but other application areas could be inspection of roads or landing strips. The best camera and illumination setup will be selected based on the chosen test case. The 3D PIXA system uses color for its 3D measurements, and also returns this color information. It will take a few weeks to get the setup ready after a choice has been made.
    • Keep in mind that the Chromasens system is a high-performance method based on stereo vision (texture is required), so the quality of the illumination is critical; the type of illumination should be chosen depending on the type of object and the distances involved. 3D PIXA is a product line with about 16 configurations, from micrometer to millimeter range and with a scan range of up to 2 m (with configurations for measuring chips as well as for handling parcels in airport handling).
    • Eric will discuss further with Kapernikov, Optimum and potential other interested companies to make a final selection by the end of June
    • Phaer also has a good relation with Fizyr (used to be Delft Robotics), a company specialized in static bin picking applications software; they are also looking into objects moving along a belt, and have a 3D PIXA setup
  • Tresco agrees to have the sensor box for measurements on waterways on one of their vessels
  • IMP is also planning to position sensors on a lock, to test sensing under varying weather conditions and detection of the positions of vessels in a lock, which could be of interest to Vlaamse Waterweg and Port of Antwerp.
  • ScioTeq is interested in detecting obstacles on runways or for helicopter landing; no data is available in-house yet, so the main question is how to collect data; this is probably most feasible with a drone.
  • Aperam is interested in scrap volume measurement, relating to the quality of melts, for various applications; photogrammetry can be an option for this, but seems to require a long computation time. It could also be interesting to be able to separate smaller and bigger pieces of scrap on a conveyor belt setup. It is not yet clear whether color is also relevant for this application; currently mostly size and volume are of interest.
  • Does the self-localization on a 2D map refer to Dimiter’s presentation, i.e. self-localization of a 2D point cloud on a 2D map? Sirris asks whether the inverse would also be possible, with a CAD model (3D point cloud) and a 2D picture; a keypoint-based approach may help there. To which plane is the 3D point cloud flattened when localizing on the map? The captured point cloud has a Cartesian coordinate system with axes aligned to the map (using other sensors), so the horizontal plane is selected straightforwardly.
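The flattening mentioned in the last point is simple once the cloud's axes are aligned to the map: keep points in a useful height band and drop the z coordinate. A minimal illustrative sketch; the function name, height band, and overall approach are assumptions for illustration, not the project code:

```python
import numpy as np

def flatten_to_map_plane(cloud, z_min=0.5, z_max=5.0):
    """Project a map-aligned 3D point cloud onto the horizontal plane.

    cloud: (N, 3) array whose axes are already aligned to the 2D map
    (as in the discussion, using other sensors for orientation).
    z_min/z_max: keep only points in a height band, so ground returns
    and overhanging clutter do not pollute the 2D footprint.
    """
    keep = (cloud[:, 2] >= z_min) & (cloud[:, 2] <= z_max)
    return cloud[keep][:, :2]  # drop z: 2D points to match against the map
```

The resulting 2D point set can then be matched to the map with a 2D registration or keypoint-based method.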

We would like to thank all participants for their presence and their contributions to the discussion. Of course, further offline comments are also welcome via mail.

Supervisory group meeting 5 (22/11/2021)

November 22, 2021, online

Participants: Frank Dekervel, Kristina, Maarten De Munck (Kapernikov), Dieter Therssen (DSP Valley), Edouard Charvet (ScioTeq), Frederic Pereda (Aperam), Gert Morlion (Vlaamse Waterweg), Hung Nguyen-Duc (Xenomatix), Jan Kempeneers (Sirris), Johan De Coster (Sick), Koenraad Van De Veere (Phaer), Petra Van Mulders (Flanders Make - EUKA), Ken Hendrickx, Yves Hacha (Tresco), Stijn Debruyckere (Arkite), Dimiter Holm, Maarten Verheyen, Eric Demeester, Mathijs Lens, Peter Slaets, Patrick Vandewalle, Abdur Razzaq Fayjie (KU Leuven)

In this (online) meeting, we presented the current project status, and in particular the ongoing point cloud case studies on object detection, pose estimation and tracking from 3D point clouds. These presentations were followed by Q&A, which is reflected below. 

Agenda and presentation slides:

  • Introduction and project status (Patrick Vandewalle)
  • 6D pose tracking and 3D person detection (Yanming Wu)
    • Q: Do you need to retrain tracking each time a different tool is used? 
    • A: Yes, you need to generate a training set that is specific for the object to track.
    • Q: The Kalman filter used for prediction assumes white Gaussian noise (p22). Can you explain?
    • A: The prediction needs to be based on a physical model, so we model the acceleration as additive white Gaussian noise (AWGN).
    • Q: From a given position of a static object, would it also be possible to calculate the 6D pose of the camera?
    • A: Currently the estimated pose is a 6D pose relative to a static camera. The 6D pose gives the relative transform between the object and the camera, so it can indeed also be used to calculate the 6D pose of the camera (given a static object). 
  • Radiological mapping (Eric Demeester)
  • Chromasens line scan setup (Maarten Verheyen)
    • Q: What is the measuring principle of the Chromasens sensor?
    • A: There is a wide range of Chromasens sensors, but they all use a stereo vision system, with matched illumination for optimal results.
  • Point cloud collection on inland waterways (Dimiter Holm and Robin Amsters)
    • Q: Is it true that rain does not affect the lidar measurement quality?
    • A: Indeed, the effect of rain on lidar image quality is very limited. We believe this is because droplets are only present during short instants while the capture integrates over time.
    • Q: Could we also use it for detecting line markings on a road surface?
    • A: This will depend mostly on reflectivity differences; for example, a smooth painted surface and a rough concrete surface should be clearly distinguishable.
  • Object segmentation from 3D point clouds (Abdur Razzaq Fayjie)
  • Obstacle detection: detecting a tire on the road (Mathijs Lens)
    • Q: Would another sensor type with a higher density help?
    • A: Yes, this would probably help. With sometimes only 3 points on the tire, the task is very difficult, and geometric features cannot be used optimally. 
    • Q: Would it be possible to combine multiple frames to augment the number of points on the tire?
    • A: Yes, this could be an interesting option to get denser point clouds. 
  • Discussion and input from user group
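The white Gaussian noise model for acceleration mentioned in the pose-tracking Q&A can be illustrated with a 1D constant-velocity Kalman filter, where acceleration is not part of the state but enters through the process noise covariance. This is a generic textbook sketch with assumed noise parameters, not the project's implementation:

```python
import numpy as np

def kalman_cv_1d(measurements, dt=1.0, sigma_a=1.0, sigma_z=0.5):
    """1D constant-velocity Kalman filter over position measurements.

    Acceleration is modelled as additive white Gaussian noise (AWGN)
    of standard deviation sigma_a, via the process noise covariance Q.
    Returns the filtered [position, velocity] estimate per update.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    # Process noise for white-noise acceleration of variance sigma_a^2
    Q = sigma_a**2 * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])                     # observe position only
    R = np.array([[sigma_z**2]])                   # measurement noise
    x = np.array([measurements[0], 0.0])           # [position, velocity]
    P = np.eye(2)
    estimates = []
    for z in measurements[1:]:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

The same structure extends to the 6D pose case by enlarging the state vector; the AWGN acceleration assumption plays the identical role there.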
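Computing the camera pose from an estimated object pose, as discussed in the Q&A, amounts to inverting the homogeneous object-to-camera transform. A small illustrative sketch; the function name and the 4×4 matrix convention are assumptions, not the project code:

```python
import numpy as np

def invert_pose(T):
    """Invert a 4x4 homogeneous transform analytically.

    If T maps object coordinates to camera coordinates (the estimated
    6D object pose), its inverse gives the pose of the camera relative
    to a static object.
    """
    R = T[:3, :3]
    t = T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T           # inverse of a rotation is its transpose
    T_inv[:3, 3] = -R.T @ t       # translation expressed in object frame
    return T_inv
```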

Final project symposium (7/3/2022)

March 7, 2022 - Aula van de tweede hoofdwet, Heverlee

In this open meeting, we presented our final project results. These presentations were followed by Q&A, a demo session and reception. 

Agenda and presentation slides: