Summer Virtual Series Session 1

Location: Virtual

Full session moderated by: Calvin Cupini, Clean Air Carolina & Ethan McMahon, US EPA

Web Tools for Sensor Data

Presented by: Graeme Carvlin, Puget Sound Clean Air Agency

Summary: At the Puget Sound Clean Air Agency, one of our main goals is to characterize and communicate air quality with the active participation of the public. Lower cost air sensors are now a large part of that discussion and we are interested in communicating their benefits, drawbacks, and how to use them to get meaningful data.

To this end, we offer air sensors to individuals and groups through our air sensor lending program (https://pscleanair.gov/539/Air-Quality-Sensors). First, we contact the applicant and discuss their air quality concerns. This discussion is often enough to answer their questions or point them towards appropriate resources. If sensors would help answer their questions, we provide the sensors along with documentation on how to operate them and interpret the data.

We often have members of the public ask us why the sensor closest to them is reading much higher than our reference monitors. Purple Air and other sensor manufacturers’ data displays often show the public uncalibrated and unfiltered sensor data. To lower the barriers to correctly interpreting sensor data, I have developed a map that combines reference monitors with quality-controlled and calibrated Purple Air data (Sensor Map: map.pscleanair.org; description page: https://pscleanair.gov/570/Air-Quality-Sensor-Map). The Sensor Map has a Health view, which shows a health-based PM estimate, and an Instant view, which shows 1-minute data and is useful during air quality events, such as wildfires.

One of the biggest challenges for community groups who want to work with sensors is taking the sensor data and creating a summary report. The Community Reporter is a tool I developed to ingest raw data from a variety of air sensors; QC and average the data; and create a summary report with graphs, maps, and text. It is our hope that these tools and the discussions that arise from their use will help the public interpret air sensor data and answer their air quality questions.

Follow Up Questions and Answers:

  • For the community sensor data (Purple...), how do you handle data provenance? For example, if you learn that one sensor was failing or sitting next to a pollution source that corrupted the data, how do you let users know that the data is suspect?
    • Data are flagged by the QC and calibration algorithms. Those flags are added to the output Excel files when using the Purple Air Downloader and Community Reporter. Currently we do not have that data on the Sensor Map, but we would be open to exploring how that might be done. Also, we hope to have our scripts and algorithms available online (e.g., on GitHub).
  • If we have a Dylos, can we contribute data?
    • Dylos data can be used with the Community Reporter to help analyze collected data.  The Sensor Map currently shows only Purple Air sensors.
  • How do you calibrate PM sensors to µg/m³?
    • Purple Airs can be calibrated by placing them next to a reference monitor, collecting data for a period of time, then comparing the measurements. Using linear regression, the Purple Air data can be adjusted to read similarly to the reference monitor (see the code sketch after this Q&A list). For the Sensor Map, sensors that are nearby reference monitors are calibrated to them. Sensors that are not close to a reference monitor are calibrated using the US EPA's national equation with temperature and relative humidity. "Nearby" is based on a semi-variogram, which measures how similarly the sensors respond as a function of distance. The basic form of the calibration equation is: Ref = Purple Air + temperature + relative humidity, where “Purple Air” is the QC’d PM2.5 ATM output from the A and B sensors.
    • If you have a Purple Air that you want to calibrate, a basic method to see adjusted data would be to go to the Purple Air map and select a calibration equation in the conversion dropdown (bottom left of the screen).  If you want the highest accuracy, I would suggest locating it close to a reference monitor in your area.  You could contact your local or state air agency to see if they would be willing to help you with this.
  • How are these sensors regarded by the science community? Can they be used for purposes other than public awareness (ex. enforcement action or research)? Also, can you expand on calibration and accuracy?
    • The EPA has a framework to understand what sensors can be used for in their Air Sensors Guidebook (https://www.epa.gov/air-sensor-toolbox/how-use-air-sensors-air-sensor-guidebook). I have analyzed Purple Airs against this framework (for ASIC 2018) and, if properly calibrated, they can be used for education as well as supplemental monitoring. They cannot be used for enforcement action or regulatory purposes. However, community science can be an important tool in effecting government change.
    • The sensors are calibrated to the nearest reference monitor if the R2 of that calibration is greater than 0.5, otherwise they are calibrated to the US EPA national equation with temperature and relative humidity.
    • Angela / Safecast: In our monitor development we’ve only considered sensors that are already accepted and in use within the science community. Safecast isn't about "public awareness" but rather creating useful datasets. It's specifically because these sensors are accepted that we use them.
  • Does the hypothesis testing function check for statistical significance and take variability into account?
    • Yes, a t-test is used to compare groups and the p-value is used to determine the result (a minimal sketch appears after this Q&A list).
  • How do you reconcile differences between sensor readings from different devices, such as Dylos and Purple Air? Also, how often do you encounter situations of problematic sensors with sudden failures and attenuation?
    • The Community Reporter has specific QC filtering functions for each type of sensor that are based on how those sensors fail.
    • The sudden failures are more common than attenuation and both together account for about 5% of all data.  Very rarely are both sensors broken at the same time.
    • Angela / Safecast: As we haven't deployed air sensors en masse, we've not had this problem. With our radiation sensors, if a sensor goes bad we take it down.
  • We saw 45,000 on our calibrated Dylos during the wildfires of Sept 2018... one could taste the smoke on those days.
    • Wow! Yes, very high concentrations (above 1,000 µg/m³ or even 3,000) can definitely be seen, especially close to a source. I would like to explore making the QC'd data, which is normally removed from the Sensor Map, available. The Community Reporter and Purple Air Downloader include both the raw and QC'd data in the output Excel file.
  • Are the PurpleAir sensors sited and deployed by your agency, or do these also include sensors deployed by members of the public?
    • We have deployed about a dozen Purple Air sensors.  The rest of the sensors were deployed by the public or other groups.  People who borrow a Purple Air from our lending program would see it pop up on the map within a few hours.
  • Are the Purple Air Downloader and Community Reporter publicly available?
    • The Purple Air Downloader and Community Reporter are not publicly available yet, but I hope to make them available soon.
  • What kind of safeguards are placed within the data screening protocols to prevent automatic discarding of real elevated data from one sensor compared to those in its vicinity?
    • Great question!  If there is a sensor next to a source, say a firepit, recording real elevated data then the two sensors inside the Purple Air will agree with each other (pass the intra-monitor QC), but won't agree with nearby Purple Airs (fail the inter-monitor QC).  If you want to cut out extreme, but valid, data then you can have the comparison between monitors take precedence over comparison between the two sensors within the monitor.  However, if the inter-monitor QC is applied only when the two sensors don't compare well to each other, then these real elevated data can be preserved.  It really depends on what your goal is.
  • Really nice framework for QAQC, more networks need to incorporate all these steps. Follow up questions - Are all the sensors calibrated to a reference monitor? Are data from both A and B invalidated when there are large differences between the two measurements or when failure of one is obvious?
    • Sensors that have an R2 of >0.5 with the nearest reference monitor are calibrated to that monitor.  All other sensors are calibrated using the US EPA's national equation with temperature and relative humidity.
    • About 5% of the time the A and B sensor don't agree with each other and neither is obviously wrong (very low or very high).  When this happens, they are compared to the sensors of nearby monitors -- about 2/3 of the time both sensors turn out to be valid and 1/3 of the time one sensor is preferred over the other.
  • Are you considering chemometric analysis of the real-time data from the network of sensors? Something in the area of principal component analysis.
    • Not for our work since the sensors only measure particle counts and do not take samples.
  • Have you taken temperature or RH into account for any correction algorithms for purple air sensors?
    • Yes, temperature and RH are included in the calibration equation. The basic form of the calibration equation is: Ref = Purple Air + temp + RH, where “Purple Air” is the PM2.5 ATM output.
  • AWS Shiny Interface
    • The AWS Shiny Interface is in beta and is not publicly available yet.  But we hope to release it in the near future!
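
For readers who want to experiment with the colocation calibration and A/B QC described above, here is a minimal sketch. It is not the agency's actual script (those are not yet public); the file name, column names, and QC thresholds are assumptions.

```python
# Sketch of colocation calibration: Ref = b0 + b1*PA + b2*T + b3*RH.
# Assumed hourly-averaged CSV columns (hypothetical): pm25_a, pm25_b,
# temp_c, rh_pct, ref_pm25.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("colocation_hourly.csv")  # hypothetical file

# Intra-monitor QC: keep hours where the A and B channels roughly agree
# (illustrative thresholds, not the agency's).
ab_gap = (df["pm25_a"] - df["pm25_b"]).abs()
ab_limit = np.maximum(5.0, 0.2 * df[["pm25_a", "pm25_b"]].mean(axis=1))
df = df[ab_gap <= ab_limit].copy()
df["pm25"] = df[["pm25_a", "pm25_b"]].mean(axis=1)

# Fit the regression against the reference monitor.
X = df[["pm25", "temp_c", "rh_pct"]].values
y = df["ref_pm25"].values
model = LinearRegression().fit(X, y)
r2 = model.score(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_, "R2:", r2)

# Per the answers above, the site-specific fit is used only when R2 > 0.5;
# otherwise the sensor falls back to the US EPA national T/RH equation.
if r2 > 0.5:
    df["pm25_calibrated"] = model.predict(X)
```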
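
Similarly, the hypothesis testing mentioned above can be sketched with a standard two-sample t-test (assumed data; not the Community Reporter's actual code):

```python
# Sketch of a two-group comparison with a t-test and p-value.
import numpy as np
from scipy import stats

site_a = np.array([8.1, 9.4, 7.7, 10.2, 8.8])    # hypothetical daily PM2.5, ug/m3
site_b = np.array([12.3, 11.0, 13.5, 10.9, 12.8])

# Welch's t-test does not assume equal variance between the groups.
t_stat, p_value = stats.ttest_ind(site_a, site_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The groups differ at the 5% significance level.")
```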

Public Institutions and Trust: Developing Community Data Assets

Presented By: Angela Eaton, Safecast

Summary: Safecast believes that people make better health and living decisions when individuals self-generate and publicly share environmental data. As a community-contributed data platform, Safecast emphasizes raw data, transparent collection, air quality standards that allow data sets to be used in combination, and free data access to provide the greatest relevance to communities and individuals. 

This presentation centers on Safecast’s outdoor air quality monitoring efforts with the Los Angeles Public Library, a system of 72 branches, and the Pasadena Public Library, with 9 branches. The pilot project places 23 monitors at branches where librarians will train and nurture local groups of environmental scientists to review and respond to real-time localized data collected at their branch and across Los Angeles and Pasadena. Network density, then, is not about evenly spreading numbers of units in a quadrant or siting monitors at known problem spots. By letting the librarians choose where monitoring takes place, the data becomes personalized - a representation of air quality in numbers that is directly experienced by each neighborhood.

Librarians are challenged to stay relevant in an age where online information eclipses what can be stored within a library's walls. As the branches are already trusted information, meeting, and discussion spaces, the Los Angeles and Pasadena Public Libraries are successfully using their physical “information space” to extend into the online “information space.” In this way, data generated from these monitors is co-created by the community and civic-funded organizations. Trust and access between the community and the Library are increased, and scientists within the community will make use of the data to whatever ends best suit their goals for better air quality. Safecast is proud to support neighborhoods, individuals, and organizations in producing reliable environmental data and we are excited to engage ASIC participants to better serve this goal.

Follow Up Questions and Answers:

  • How do you assure standardized and quality data collection with untrained people as data collectors?
    • It doesn't take scientific training to turn a device on, and we believe it's exactly this kind of nonsense from academics, trying to paint the public as ignorant, that stands in their own way.
  • How do you ensure data provenance? As we move to more and more machine learning and artificial intelligence systems, raw data that might have defects in it can and will result in biases that the ML/AI engines base their decisions on. If a researcher discovers that some nodes were defective, how do you warn researchers that those sensors may not really be trustworthy nodes?
    • If the ML/AI is adopting poor data standards, that’s a problem with the code, not with the collection of data. This is why we have tight limits on automation and still have humans involved as a core part of data moderation. Further - telling people to trust the end result of manipulated data is part of the problem. The data should make it clear if it's trustworthy or not on its own. If the data collection and data processing is not transparent, don't trust it. If it is transparent, it can be easily reviewed and is more trustworthy.
  • Same problem with data provenance... How does one track back work from a sensor which is later discovered to be faulty?
    • The Safecast platform accepts raw measurements of particulate matter (PM 1.0, 2.5, and 10) as well as temperature and humidity.
  • How do you ensure that you trust your own data? I.e. what is the process for that in terms of calibration and verification?
    • Every sensor is mapped and labeled and all data is time, date, and sensor-name stamped. Data that’s different from expected is not necessarily faulty, so it’s not removed. If data is consistently out of sync with surrounding sensors, we follow up with the monitor host to better understand what’s going on (was there a polluting event? Is the monitor dirty or tampered with?). Other levels of trust are tied to inclusion and direct participation: Safecast volunteers designed and tested our air quality monitors with publicly available components over the course of 4 years.
    • We do not calibrate our sensors to other systems. The Plantower sensors we now employ performed well in our tests, have been in use for many years, and are reliably documented. If a community is interested in a specific calibration, we encourage them to do so; however, calibration should not stop communities from collecting data.
    • Since uploads to Safecast come from known sensors, we do not need to re-verify this data. The data is also community monitored - sensors that are producing data out of the norm encourage people to examine what’s behind the difference.
  • For the Safecast AQ Monitoring 2020 at public libraries, how were you able to obtain city permission to place monitors at the libraries? Was there any pushback?
    • The LAPL program was developed with the Library to support groups within the branches. Branches were chosen based on librarians at those branches who wanted to learn how to support air quality monitoring. The only pushback we’ve had has been from building maintenance, who’ve had concerns with roof access or placement onsite. An additional point to note here is that we don't ever concern ourselves with getting city permission, as we are under the belief that people have a right to know what they are being exposed to in their own environment. We're happy to collaborate when it's mutually beneficial, but we never wait for permission, nor would we consider heeding opposition if the community itself called for environmental monitoring.
  • Which brand air monitors could be used to add data to Safecast? URAD?
    • At this point the Solarcast and Airnote, two monitors developed by Safecast and factory-produced, are the only monitors contributing to the Safecast air quality data set. However, the sensor and many of the components are commercially available. If the data conformed to Safecast's standards, it could be included.
  • How are the Safecast monitors calibrated? What type of sensing technology is used?
    • Sensors in Safecast monitors are factory calibrated.
    • The current Airnote monitor uses a Plantower PMS7003 sensor, a solar-charged battery, and a pre-paid 4G cellular card.
    • The earlier Solarcast monitors tested the Plantower PMS5003 sensor and the Alphasense OPC-N2 for reliability and consistency. The Solarcast also includes two industry-standard 2" pancake Geiger tubes produced by LND that are sensitive to Alpha, Beta, and Gamma radiation, with one of them compensated to detect only Gamma radiation.
  • Are the specifications for your sensors publicly available?

Mapping hyperlocal air pollution to drive clean air policies

Presented By: Harold Rickenbacker, Environmental Defense Fund

Summary: Lower cost air quality sensors are redefining the power of comprehensive spatial-temporal data. But while technology is advancing and creating new hyperlocal insights, cities are struggling to turn that data into local solutions that clean the air and improve local health.

Figuring out how to design and deploy an air pollution monitoring system, and then developing clean air policies based on the data, can be daunting. Environmental Defense Fund (EDF) will help guide local leaders to scientifically rigorous, meaningful clean air decisions, by giving a behind-the-scenes look at our monitoring efforts in pilot cities across the globe. 

Through partnerships with technology firms, scientists, grassroots organizations, and city leaders, EDF has garnered best practices for using both mobile and stationary monitoring networks to inform land use zoning and permitting, implement emergency public health interventions, and advise the design of traffic management measures and transportation projects. Learn key takeaways from our work in:
 • London, UK, to measure pollution levels before and after the introduction of a new Ultra Low Emissions Zone;
 • Houston, TX, to identify elevated levels of benzene (~300 ppb) near petrochemical facilities after Hurricane Harvey; and
 • West Oakland, CA, to develop city-wide exposure reduction strategies, such as truck management and electrification, to benefit nearby port communities. 

EDF aims to create a resource center for city leaders and academics interested in using air pollution data to design new solutions, build political support for action, increase compliance, and hold polluters accountable. For example, our newly released how-to guide for hyperlocal air pollution monitoring (available at edf.org/cleanairguide) demonstrates how a new wave of transdisciplinary research is bridging air pollution science, grassroots advocacy, and policymaking and governance.

Part 1 Group Question & Answer


PurpleAir Discussion

Hosted By: Adrian Dybwad, Founder

Summary: Have your technical questions about air quality monitors and networks answered by Adrian Dybwad. Whether you're new to air sensors or you're running a network in your community, Adrian can offer advice and support of your efforts.

Follow Up Questions and Answers:

  • Is it an actual calibration, as in you are physically changing something in the instrument? Or is it a correction applied to the data in post-production?
    • No correction is set per sensor; we just verify they are performing as expected.
  • Do you routinely check Purple Air sensors against FRM sensors by colocating the Purple Air sensors with a regulatory monitor?
    • Lots of users have done this and you will find quite a few studies including AQSPEC.
  • Where do you find these data corrections on the PA website?
    • In the bottom left of the site, there is a conversion drop down box.
  • When you do inter and intra monitor calibration, what is the tolerance for you compared to the natural variability?
    • Each sensor should agree with others to about 80% on a peak reading over 500 µg/m³, and they must correlate to more than 90% in the trends during the test (see the sketch after this list).
  • What’s next for Purple Air?
    • We are testing the Bosch BME680 gas sensor. We continue to build out the website and API.
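
A minimal sketch of an A/B channel acceptance test along these lines; the thresholds mirror the figures quoted above, but the exact test procedure and data layout are assumptions for illustration:

```python
# Sketch of an A/B (intra-monitor) agreement check: >90% trend correlation,
# and peak readings over 500 ug/m3 agreeing to within about 80%.
import numpy as np

def channels_agree(pm_a: np.ndarray, pm_b: np.ndarray) -> bool:
    # Trend check: Pearson correlation between the two time series.
    if np.corrcoef(pm_a, pm_b)[0, 1] < 0.90:
        return False
    # Peak check: on readings above 500 ug/m3, the lower channel should be
    # at least 80% of the higher one.
    peaks = (pm_a > 500) | (pm_b > 500)
    if peaks.any():
        hi = np.maximum(pm_a[peaks], pm_b[peaks])
        lo = np.minimum(pm_a[peaks], pm_b[peaks])
        if (lo / hi).min() < 0.80:
            return False
    return True

# Example with two synthetic, well-matched channels:
a = np.array([400.0, 520.0, 610.0, 480.0])
b = np.array([390.0, 515.0, 590.0, 470.0])
print(channels_agree(a, b))  # True
```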

Open-seneca: development of a low cost air quality sensor network and its implementation to measure PM2.5 in the city of Buenos Aires, powered by citizen science

Presented By: Peter Pedersen, University of Cambridge

Summary: Air quality reference stations provide data with low spatial and temporal resolution. They are also expensive, inhibiting their implementation in low-income countries. The design of mobile air quality sensors with a cost below £100 per unit is presented here, together with the implementation of a citizen science monitoring scheme for PM2.5 in Buenos Aires, Argentina. Over 7 weeks, 20 mobile sensors were used to gather over 400,000 data points across 3,500 km. Hourly mean PM2.5 values between 0 and 70 µg/m³ were measured and compared to a reference station. By applying a baseline correction using different measures of centre computed over 15-minute periods, the method identified 20 pollution hotspots. Quadrants between 200 and 400 m² with PM2.5 more than 30 µg/m³ above the baseline can be visualized using a new methodology of interactive online maps. The data from this mobile sensor network is complementary to and enriches that of a stationary station. Insights on the added value of citizen engagement are also outlined. The expansion of these schemes offers strong potential for monitoring air quality in urban areas, particularly those that do not currently have reference stations and have limited financial resources.
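
A minimal sketch of the baseline correction described above, assuming a rolling median as the measure of centre and hypothetical column names (the open-seneca pipeline may differ in its details):

```python
# Sketch of mobile-data baseline removal and hotspot flagging.
# Assumed CSV columns (hypothetical): timestamp, lat, lon, pm25.
import pandas as pd

df = pd.read_csv("ride_data.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Baseline: a rolling measure of centre (here the median) over 15-minute windows.
df["baseline"] = df["pm25"].rolling("15min").median()

# Hotspot candidates: readings more than 30 ug/m3 above the local baseline.
df["excess"] = df["pm25"] - df["baseline"]
hotspots = df[df["excess"] > 30][["lat", "lon", "pm25", "excess"]]
print(hotspots.head())
```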

Follow Up Questions and Answers:

  • What were some of the challenges presented by using bikes specifically?
    • Since we were using junction boxes, the mounting mechanism was not the easiest.  Ideally, we should have minimised the size of the sensor, and have done so for our current variant - but a lot more can still be done.
  • For the bike data do they take into account the time of day that all the data was collected? For example data during rush hour may look higher no matter where the data is collected.
    • Yes, the baseline removal would have taken this into account. If you wanted to see the rush hour areas, you would change the period of the baseline removal.
  • Does biking speed influence sampling flow into the sensor? Was the inlet designed or oriented in a way to reduce this?
    • Within cycling speeds (<30 km/h) we did not see any change in PM measurements with our inlet design. The inlet and orientation were specifically designed to minimise wind effects.
  • How can you harmonise measurements between different sensor types? For example, Sensirion, Alphasense, Plantower, Shinyei, Sharp, etc. PM sensors all behave differently compared to each other - before you even consider comparison to a reference technique. How do you deal with this?
    • It’s very difficult to compare values from different low cost sensors - you would require colocating for long periods and deriving calibration curves for the common pollutant profile present in your area.  There are some recent articles which evaluate the differences between low cost sensors, and they conclude that low cost sensors from different manufacturers have their peak sensitivities in different regions, and so it’s not so safe to cross compare if they haven’t been calibrated.
    • We stuck to one sensor manufacturer, to ensure the data was cross-comparable, and only used the reference station as a method to validate the data. Open-seneca doesn’t currently aim to provide reference level data, but instead identify hotspots and inform policy which is entirely possible with low cost sensors.
  • Do you correct for air pressure and temperature in considering concentration?
    • No - we take the raw value from the sensor and apply the calibration curve from the reference station.  
    • Angela / Safecast: Not for Safecast projects. As long as all the data is collected and published openly, any logic can be applied after the fact. The problem lies in companies that don't release the data and / or only publish post-logic data without clearly and completely stating the logic applied.
  • How do you know the Sensirion PM sensor is able to detect near-field combustion particulate from tailpipes? (As opposed to resuspended road dust)
    • Not to my knowledge. It may be possible to derive something from the distribution across the bins (PM1, PM2.5, PM4, PM10) the SPS30 provides - but I don’t know how reliable that’d be. I would be more inclined to recommend using gas sensors in conjunction to confirm whether the pollutants are derived from tailpipes.
    • Angela / Safecast: This would not be reliable. With air sensors you need a sensor looking for every specific thing you want to measure. You can't reliably infer anything.
  • What will be the next steps on your research?
    • Expanding to more low- and middle-income countries (LMICs), and refining our hardware and data processing methods.

Development of a method for local health jurisdictions and schools in WA to use low-cost monitors for wildfire smoke preparedness

Presented By: Orly Stampfer, University of Washington

Summary: In WA, local health jurisdictions have reached out to the WA Department of Health (DOH) for assistance with making decisions about school activity restrictions, closures, and additional air filtration needs during periods of wildfire smoke. Through the Wildfire Smoke Impacts Advisory Group of WA, which includes multiple agency, organization, and academic partners, DOH developed school closure guidance based on indoor particulate matter (PM) concentrations, but there is not an established protocol for assessing this. Variability in school building characteristics and spatial distribution of smoke present challenges to DOH and local agencies in providing guidance.

The advisory group also developed a method for local health jurisdictions and schools to use low-cost monitors to estimate their own ratio of indoor to outdoor PM concentrations over a short time period, which was pilot tested at three sites. Longer term indoor/outdoor data in locations with expected periods of high ambient PM concentrations will be used to assess how the ratio changes with variability in outdoor concentrations. This relationship will be applied to the schools’ findings, so during periods of wildfire smoke the schools may estimate indoor PM concentrations based on outdoor monitors. We will present findings based on low-cost sensor data to demonstrate how the method of indoor/outdoor ratios may be used to inform planning for clean air school environments.
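
A minimal sketch of the indoor/outdoor ratio method described above, using illustrative numbers rather than the advisory group's actual protocol:

```python
# Sketch of the indoor/outdoor (I/O) ratio method (illustrative values;
# not the advisory group's protocol).
import numpy as np

indoor = np.array([12.0, 15.5, 14.2, 18.9])    # hypothetical hourly PM2.5, ug/m3
outdoor = np.array([35.0, 48.0, 42.5, 60.1])   # paired outdoor readings

# Estimate the infiltration ratio from paired measurements. Restricting the
# estimate to high outdoor concentrations keeps the ratio stable (see the
# Q&A below on variability at low outdoor PM).
mask = outdoor > 30
io_ratio = np.median(indoor[mask] / outdoor[mask])
print(f"I/O ratio: {io_ratio:.2f}")

# During a smoke event, estimate indoor PM from a nearby outdoor monitor.
outdoor_during_smoke = 180.0  # ug/m3, hypothetical
print(f"Estimated indoor PM2.5: {io_ratio * outdoor_during_smoke:.0f} ug/m3")
```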

Feedback from staff involved in the pilot project will help us understand facilitating factors and barriers to using the monitors, and if/how it supports schools in preparing for wildfire smoke. This project is an initial step in augmenting state agencies’ abilities to provide guidance to assist in regional community decisions about wildfire smoke and protection of children’s health, and is a work in progress. This project also explores how low-cost monitors may be useful in characterizing and mitigating exposure to PM from wildfire smoke.

Follow Up Questions and Answers:

  • Is it possible the design of the HVAC system had more of an effect than the filtration? Did you correlate the ratio with HVAC type and building age? How does ceiling height affect the air quality indoors? Did you measure outside air coming through the HVAC? Did you correlate human traffic with the sensor results in the various rooms?
    • In both the pilot project (5 schools) and the planned follow-up study (2 schools) we will collect data on the HVAC system, filtration, building age, room characteristics, windows and doors, average occupancy and movement, building/room hours of use, and indoor sources of PM (e.g., cooking, cleaning, printers). We expect that this information will be very important to our interpretation of the indoor and outdoor PM data. We could qualitatively compare the indoor/outdoor data between different types of building characteristics, but the project and study are too small to draw any statistically significant conclusions about comparisons between building characteristics. We are not planning to track the number of people moving in and out of the various rooms, but we are planning to compare unoccupied hours to occupied hours. We are also planning to compare the ratio with all data included vs. with peaks of indoor-generated PM removed.
  • What kind of special considerations are there for measuring indoors against outdoors? Does humidity play a huge factor?
    • Humidity and varying PM composition between indoors and outdoors can impact sensor readings and calibration. It will be important to assess how calibration models for the low cost sensors perform indoors and outdoors against the gravimetric samplers in the follow-up study. Another consideration is lag time for outdoor PM entering indoors.
  • What are your thoughts on the very wide variability of I/O ratios at very low outdoor PM concentrations (per the graph it looks like the ratios span the full range from 0 to 1)?
    • The variation in the ratio at very low outdoor PM concentrations reflects the potential importance of variations in indoor sources and/or ventilation conditions when outdoor air pollution is low. With a small denominator in the I/O ratio, the I/O ratio will be more sensitive to variation in indoor concentrations. In our study, we're most interested in using the I/O ratio as an indicator of infiltration, and so we're most interested in cases when the outdoor concentrations are high.
  • Do you have feedback (surveys) from teachers and students on where they smell woodsmoke inside the school during events? Do you think their noses could be a useful indicator of where penetration is happening indoors?
    • We don’t have surveys or feedback about where students and teachers smell woodsmoke. I think that would be a really interesting engagement tool if the students and teachers were enthusiastic about it.
  • Have you explored the effectiveness of vegetative buffers around schools for PM removal?
    • We have not explored this. Thank you for this suggestion. As I mentioned above, this project and follow-up study are too small to draw conclusions about how building characteristics (including the presence of vegetative buffers) impact the indoor/outdoor PM ratio, but we could make qualitative comparisons, and building characteristics are important for data interpretation. I will look into adding vegetative buffers to our list of information to gather about the schools.

Part 2 Group Question & Answer

Your Additional Questions Answered!

  • How much of your grant money actually goes back into the community? Most AQ monitoring (unfortunately) needs to be conducted and driven by low-income communities. Cleaner air is great, but is there another way you are pouring back into these communities, such as through pass-through grants?
    • Graeme: We agree that low income communities too often bear the burden of having to advocate for and address the environmental inequities in their own communities.  Our agency has identified four "focus areas" that experience poor air quality, elevated health impacts, and sociodemographic barriers to clean air and focus a lot of our efforts in those communities.  When administering air quality grants, we try to involve the community at each step of the process, but not require them to do any work they don’t have the capacity or desire to perform.  We are also working on a Community Generated Projects effort to provide funding to community groups directly to help improve their air quality.
    • Orly: I also agree that community partners should be financially compensated. Not only for AQ monitoring activities, but also for the time spent collaborating on other steps of the process. I don’t know if this will be possible for all of the projects I’m involved in, but for two projects community partners were included in research grant budgets according to their total time spent on the project.
    • Angela/Safecast: All of our grant money goes towards our work and all of our work is driven by Safecast volunteers. We aren't in the position to fundraise for other organizations at this point. We have grant-partnered with community leaders looking to raise funds for their own environmental monitoring with Safecast. An example of this would be the LAPL project.
  • Do they approach the community through surveys to decide the locations?
    • Graeme: We work closely with the communities in our jurisdiction to help meet our air quality objectives. Depending on the purpose of the sensor, we may first identify a geographic location we are interested in and then canvass the community for locations, or we may ask the community which locations they think are important to monitor. We also work with libraries and other public agencies to host sensors.
    • Angela/Safecast: Generally, communities approach Safecast and already have an idea about their needs and desires for AQ monitoring. This is part of our “Pull over Push” philosophy.
  • Are there PM1 sensors being considered? And what is the experience so far with security of distributed sensors and damage caused?
    • Orly: So far only 1 of about 20 sensors I’ve used has been damaged, and that was due to placement too close to a sprinkler we didn’t see initially.