"Title","URL","Text" "Green Teams in Hospitals - Video Lecture (Sample)","https://www.youtube.com/watch?v=FrhauKS5aVc","So a green team in itself is a team of motivated members that are working together on sustainable initiative within the hospital. So for example, we at the operating room the department have a green team with surgeons that are involved, managers that are involved, nurse, the set already involved to help everyone together to implement sustainable choices. And for the LUMC green team overall, we have this network to connect one and other and to help each other with the barriers there, they are having or maybe to help them with new ideas. I would advise an organization to set up a green team because I think it's important that we accelerate this transition. And I know from practice that when we have a green team, things are happening quicker. So for example, what we want to do is we want to, instead of using a lot of disposable, we want to use more reusable. And I've seen that when you make one person or two people responsible for this task that it happens quicker than when we say, okay, we want to change to more, or using more reusable. And within a green team, you have a few members that have that motivation. They want to be responsible for one certain task. And in this way, a green team will help to, yeah, quick and distransition. In our, or our green team, we have four different pillars. The first one is reducing waste because we have a lot of waste that we're producing. The second one is focusing on disposable and reusable, because we want to use more reusable instead of disposable bowls, because we know that has a less environmental impact. We also are focusing on reducing the amount of anesthetic gases we're using in a national anesthetics. And the fourth one is also reducing the medicine residues. So you can focus on these pillars too, and also have ideas within these pillars, which you can focus on within your green team. The last thing that you can do is you can focus on more long-term goals. So for example, you say, well, in two years, we don't want to use any reusable anymore in our department. And then you do everything, and you make up for you. Come up with ideas to make sure that you achieve your goal." "Recycling in Hospitals - Video Lecture (Sample)","https://www.youtube.com/watch?v=8hfq0Psh9yU","In this video, we show how circular strategies lead to the reduction of waste on the one hand and the creation of raw materials on the other. In collaboration with various hospitals and industry partners, we are investigating at Delph University of Technology, how we can use technology and science to turn medical waste into product. As a sustainable hospital, Modestad Hospital wants to have waste converted into new raw materials. In this video, myastad is the supplier of new raw materials with its waste. Normally, a lot of waste comes from the operating room, such as wrapping paper from instrument sets. The waste is collected in the operating room and is no longer thrown away, but recycled. The waste that is normally incinerated is now collected for further processing. The waste is transported to green cycle, a field lab in the Netherlands, where it is processed into a new raw material. The wrapping paper is melted down at a high temperature in a special melting furnace to a solid block of material with a very high period. The block is further processed in smaller pieces and granulated. 
The raw material is used to make new medical products via a process of injection molding, in this case the GoJack instrument opener. As a medical device, the instrument opener requires CE approval. It has been extensively tested on its mechanical properties and is used as a medical device in the central sterilization department of Maasstad Hospital. This is an example of the circular economy, in which Maasstad Hospital uses products made from its own waste. In this way, we make the world more sustainable and become an example for the circular healthcare economy."
"Climate law and Green Deal in Healthcare - Video Lecture (Sample)","https://www.youtube.com/watch?v=EkZGktwxu-M","The number of humans on our planet has grown tremendously, from 1 billion in 1800 to over 8 billion in 2022. A true population explosion. While the Earth's population grew, so did our mass-consumption society. In healthcare, we also experienced a significant growth in the number of patients and medical devices. After the Second World War, we saw huge global economic development with great technological improvements. These improvements were also implemented in the field of medical devices. The technological instruments improved: endoscopes were fitted with 4K chips, energy devices with complex designs were introduced, and robotic surgery platforms entered the market. Medical devices were made out of ever more mixed materials, and reusable devices were replaced with single-use products. The hunger for plastics and steel grew, as all products had to be manufactured in ever-increasing volumes, with more mixed materials and rare earth metals. The introduction of single-use medical devices, the so-called disposables, in combination with mixed materials propelled waste streams from hospitals. Where hospitals in high-income countries once used instruments for many years, they now dispose of high-value instruments after every surgical procedure. To make people healthier, hospitals use and consume a lot of energy, food and materials. Healthcare in many high-income countries is responsible for 6-7% of all CO2 emissions, and for a considerable amount of waste. This makes healthcare one of the most polluting sectors. The Green Deal on healthcare contains agreements to make the healthcare sector more sustainable by reducing CO2 emissions. The healthcare sector could reduce carbon emissions that occur in the global production chain of medical goods and pharmaceuticals by applying green procurement strategies. Suppliers and healthcare organizations share a responsibility to introduce green strategies. In terms of their impact on people and the climate, they have a corporate social responsibility to help achieve the ambitions set out in the Green Deal."
"Medical Technology and Sustainable Healthcare | Online Courses","https://www.youtube.com/watch?v=qSc__aJtF5M","For health professionals to meet evolving needs and challenges that are as diverse as the communities they serve, staying ahead is not a choice but a necessity. Our courses help professionals like you to create and work in healthcare systems that deliver efficient, high-quality care in an affordable and accessible way, ultimately helping them save lives every day, everywhere. Our portfolio is specifically designed for professionals such as doctors, educators, hospital staff, medical specialists and engineers. We work side by side with partners in hospitals and in the medical industry to co-create our courses and ensure they reflect and respond to the latest needs.
Academic knowledge goes hand in hand with the latest research and technological advances. We provide knowledge and tools that are globally accessible and locally relevant. This is especially significant for countries where the need is high and the resources are limited. Our courses are available online, increasing worldwide access for thousands of professionals. It's not just theory; it's about creating real solutions. For instance, we help healthcare organizations reduce waste in their supply chains and apply circular principles. We focus on better care for patients by mapping their experiences and perspectives, and by developing methods of analysis that are faster and enable better interventions. This is because we are committed to making a positive impact on our environment and on the sustainable provision of healthcare for all, everywhere. Are you ready to embrace the future of healthcare? Explore our courses. Your lifelong learning journey starts here."
"Webinar: Strategic Planning and Re-Designing Deltas - Interdisciplinary Approaches","https://www.youtube.com/watch?v=s79tuBG-qck","Welcome everyone, my name is Angelica Lentius. Welcome to this webinar hosted by the TU Delft Extension School for Continuing Education as part of the Delta Week events leading up to the celebration of our university's Dies Natalis. I would like to give the floor to Leon Hermans to introduce himself, our speakers, and the topic of this session today. Yeah, thank you very much, Angelica, and welcome everyone. Thank you for joining us for this webinar. My name is Leon Hermans. I'm an associate professor in Water and Environmental Policy Analysis at Delft University of Technology and also at IHE Delft. And I'm really excited that we can spend the coming hour with each other to exchange ideas and talk about strategic planning and redesigning deltas, especially interdisciplinary approaches for doing so. In this one-hour webinar, I'll give you a short introduction to the topic and how we decided on the topics we would like to discuss with you. Then we'll give the floor to our speakers, whom I will introduce to you before their presentations. Throughout, including during the introduction, please feel free to type any questions, comments or thoughts you would like to share in the chat; that will enable me to see them and raise them to our speakers for you. We will hold the discussion at the end of the presentations, so that we can also hear combined views from both Floor and Phi, but please feel free to type your comments and questions throughout the webinar. And please be aware that we are recording this webinar, so that you are aware of this. But then, to move to the topic of the webinar, strategic planning and redesigning deltas: why do I think that is such an interesting topic? Well, it's because deltas are complex systems, and that means that we cannot simply design a delta like we can maybe design some other engineering or technological artifacts. It also means that master planning in a delta is something different from what we are traditionally used to. But at the same time, of course, planning and design are still possible and are very important for deltas, and we have seen various new approaches emerging in the past few years. What I think is an important aspect in many of these approaches is that they pay much more attention to, and make much more use of, systems thinking.
But they also pay much more attention to the roles that stakeholders, or actors, play in delta planning. So that is what we decided to focus on in this webinar today. And of course, if you think about stakeholders, actors and systems, then I think nowadays the consensus among most experts in the water and delta domains is that we would want an open and inclusive dialogue and process, and to take into account the needs, positions and possibilities of all critical actors in our delta plans. We also should pay attention to, and integrate as much as possible, all the relevant dimensions of these complex delta systems, meaning that it should be cross-sectoral, but also multi-level. That is an ideal that we talk about a lot, and that many of us agree with; I certainly do. But at the same time, in practice, integrating everything and including everyone is not possible. How would you do that? So instead, I think what we try to do is make sure we have the most important elements included in our delta planning and design processes and products. But that still leaves a lot of interesting questions on the table, I think. How do we decide who the most important stakeholders are? That may also differ from one delta to another: it may matter whether we are working on a design for a more resilient Mekong Delta, for the Yala River Delta in Kenya that ends in Lake Victoria, or for the Rhine-Meuse Delta in the Netherlands. So which stakeholders should be included? Who are the most important ones? How should we include them? But also, what kinds of stakeholder considerations do we need to take into account in making and implementing delta plans? And of course, the stakeholders change over time. Something similar applies to system aspects. What are the most important system aspects? What are the boundaries for planning and making delta plans? Is that maybe the urban port area at the end of the delta? Does it include the hinterland? Does it even include transboundary aspects, or the adjacent river basin? And how do we then still make sure all of this is connected, if not integrated, then at least not in contradiction with each other? And how do we link these actors and these systems? Because here we present them a little bit as different things, but as I think we will hear in the presentations by Phi and by Floor, they are in fact almost impossible to separate. So they are connected, and they need to be connected. And on top of that, we are planning under exciting new developments, both desired and sometimes undesired ones. We see new opportunities arise in new technologies, especially maybe much more nature-based solutions, combined with more traditional hard infrastructure solutions, and different financing options. All of this is still, at least partly, uncharted waters. And the only thing we do know about interventions in complex systems is that they always come with surprises and with unintended side effects. So that makes it really interesting, I think, to talk about how we deal with stakeholder aspects and how we make good use of systems thinking approaches for redesigning deltas. And I'm very happy that we have found two experts on these topics to tell us a bit more about their ideas today. We will start with a presentation by Dr.
Ho Long Phi, who is a senior consultant and vice-managing director at enCity, a consultancy company based in Asia with offices in Singapore, Vietnam and, I think, Indonesia. But Dr. Phi has actually had a very rich career spanning different fields and places of work. In the past, for instance, he has been the director of the Center of Water Management and Climate Change and a professor at Ho Chi Minh City... sorry, Vietnam National University Ho Chi Minh City, and before that he played an important role in flood control and advising on climate change for Vietnam and for Ho Chi Minh City. Dr. Phi will talk to us about stakeholder dynamics that are especially important for long-term delta plans. And after Dr. Phi, Dr. Floor will present. Floor is a colleague of mine at the Policy Analysis section at TU Delft. She is a policy analyst with a particular interest in the role of systems thinking for generating innovative and creative solutions for wicked problems. Also very interesting is that she has taught and developed courses, including online courses, for instance Room for the River, about perspectives on river basin management. And recently, with colleagues, she published a book on complex coastal systems and transdisciplinary learning on international case studies, which I think offers interesting insights, and I hope we will hear at least a bit about those in her presentation. With that, I have introduced what I wanted to introduce, and I would like to invite Dr. Phi to start his presentation. Thank you very much. What I'm talking about is mostly based on my experience: many years of consultancy, advisory work and education. I have seen a lot of lessons, a lot of trial and error, a lot of unanswered questions, and I have tried to develop a framework to explain these things; today I want to share a quick view of it. You may wonder about a lot of questions: why does a project get delayed, why is a plan prepared very well but not implemented at all, or, even bigger problems, how come populism or nationalism rise, or why can't democracies or donors always prevent social conflict? These seem to be very different things, far away from each other, but according to my observation, beyond their own specific problems, they may share a common cause. So I started with the MOTA framework, which can help us explain what happens during and after the process of planning. Usually it starts with a trigger. It could be social, economic, environmental, physical, political, military, and so on; there are many types of triggers. The trigger then meets the abilities of stakeholders: financial, technical, institutional, social, economic; I classify all of these as the ability of a stakeholder. And then, as a result of both emotion and reasoning, we come to a perception, strong enough, of opportunity or threat. If that perception is strong enough, we may have a motivation to do something, and that motivation could be negative, so that we oppose, or positive, so that we support; passive, okay, someone has to do it, or active, we actually have to do it ourselves. That depends on the strength of your perception and on your ability. And when motivation and ability come together, we get to action. Action planning is the crucial step of any planning: that is when we have to decide what, where, who and how to do the thing and to make it happen.
That results in implementation and also, of course, in impact. The impact may feed back on the original trigger somehow; I call this the collective impact. This is the original framework, the MOTA, that we published back in 2015. But after that, I didn't think it was enough, because this framework can be applied to a project, to a limited-scale plan with a limited time span. For long-term strategic planning, there is a very important additional factor: the impact on individuals, on each stakeholder, because the impact may not be the same for all stakeholders relevant to the implementation. You can see that the key factors may change along the course: perceptions change, abilities change and motivations change. And of course, the next action, the next step of the roadmap, then has the chance to create another impact. So the result is a loop, which I call the dynamic MOTA. Now you know that MOTA is the abbreviation of motivation and ability, the two important factors for any action to happen. That is the basis of the framework. The MOTA framework can help to improve the maturity of a policy or plan, to identify priorities in an action plan, and even to estimate the sustainability of an implemented policy or plan. Now we come to the crucial part of the process: the stakeholders. In our framework, the classification includes four main groups of actors: T actors, A actors, B actors and S actors. The T actors are the technical people; their ability comes mostly from data, models and calculations, their motivation is mostly professional, and they often care less about the side effects; I call this the T MOTA. For the A actors, the authorities, ability is defined by legitimate power, and their motivation is also of a different kind. And the S MOTA is the ability and motivation of the community, of the mass population: what they have is the power of the masses, but their motivation is mostly focused on personal interests and beliefs. Actually, one actor can combine several of these abilities, but typically one is dominant for each class. And depending on the plan or policy and its stage, there will be one lead stakeholder group, the most important actor that has to make the action happen. But because of their limited ability, they have to establish a coalition with other relevant stakeholders, to gain the complementary abilities they need and also to reduce confrontation. Here is an example from a case study we did in the Mekong Delta about ten years ago. You can see the A group, the B group, the T group and the S group. The one with high motivation and also high ability is the lead stakeholder; the rest are in a different situation, and that makes a difference. Our design, our planning, has to make the transition from the lower positions to the higher positions: we try to move the stakeholders to the highest position. We can do that mostly by capacity building. And you can see that in this case, not all capacity building is useful; some types of capacity building are much more important than others. So with the framework, we can customize capacity building, so that we can improve the maturity of a plan to be implemented. And the package should be adapted to new situations, because the positions of these groups can change over time. Next, after the short sketch below, is an example of complementary ability.
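To make the scoring idea concrete, here is a minimal Python sketch of how a MOTA-style score per stakeholder group might be computed. The 0-to-1 scales, the example numbers and the group entries are illustrative assumptions, not values from the talk; the talk only establishes that the score combines motivation and ability and that the group strongest on both acts as lead stakeholder.

```python
# Illustrative sketch of a MOTA-style stakeholder score (motivation x ability).
# Scales (0..1) and example numbers are assumptions for illustration; only the
# idea "combine motivation and ability; the strongest group leads" is from the talk.

def mota_score(motivation: float, ability: float) -> float:
    """Combine motivation and ability into a single MOTA score."""
    return motivation * ability

# Hypothetical stakeholder groups: A = authorities, B = business,
# T = technical experts, S = society/community.
stakeholders = {
    "A (authorities)": {"motivation": 0.8, "ability": 0.9},
    "B (business)":    {"motivation": 0.6, "ability": 0.7},
    "T (technical)":   {"motivation": 0.9, "ability": 0.5},
    "S (community)":   {"motivation": 0.4, "ability": 0.3},
}

scores = {name: mota_score(v["motivation"], v["ability"])
          for name, v in stakeholders.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: MOTA = {score:.2f}")
print("Lead stakeholder (highest MOTA):", max(scores, key=scores.get))
```

In this picture, capacity building is anything that raises a group's ability term and so lifts its MOTA score.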
Take the Mekong River basin. Upstream there are a lot of dams: dams in China, dams in Laos and Cambodia; downstream is the Vietnamese Mekong Delta. How can we deal with that? So far, they usually use the institutional tool: try to make a deal about what each party should do. But it has been of little use, because after decades of negotiation there is no result at all. Why? Because there are different motivations and abilities between the upstream countries and the downstream country, and Vietnam's Mekong Delta has no good card to play in the negotiation. What they have to do is strengthen their MOTA. Look at the watershed: for the Mekong basin, the dominant actors upstream are mostly A and B, that means the authorities and the private sector, and that may cause conflict and confrontation among countries and communities. Therefore the downstream country has to build up its T, B and S MOTA, because so far they have mostly used the A MOTA to deal with the upstream. So what we have to focus on is not to continue with that one alone, but to strengthen the others. For example, if we managed to bring about a resurgence of water-related livelihoods for the downstream community, for the people in the Mekong Delta, that would be very good leverage in the transboundary negotiation. In the next example, I can show you how to identify what is missing, for example in climate change adaptation policy. So far, the lead group of actors there is scientists and government, with some effort, but most S and B actors are still outside, because of, for example, the direct and current losses of some social groups; and a low B and S MOTA may drive resistance and populism. Think, for example, of jobs in oil-related industries or in the conventional car industry. Governments try to apply incentives for electric vehicles, but people are still left behind. You can see the picture on the right: it just shows a very nice, broad-brush green landscape. But in fact, what we have to do is strengthen the S MOTA. We have to share the benefits of an adaptation policy: more climate funding should go directly to people in poverty, not to incentives for the people who can afford to buy an electric car, for example. And here you can see the stakeholder coalition dynamics. Start from the left side: the trigger now is water scarcity, and two options could be available, a reservoir or reforestation. The stakeholders who support the reservoir idea may form one coalition, joined by the farmers who would get the water for irrigation, whose MOTA gains as well, and the coalition may extend further, for instance to the construction sector. But for the opposing side, the reservoir could be replaced by reforestation, to reduce runoff and to keep water in the basin; then the solution is reforestation, not a reservoir. Because when you build a dam, migration may happen: you may have to relocate native people from within the reservoir area to another place, and the same can hold for reforestation. That may form another stakeholder coalition, together with social and environmental activists, and even politics may come in. The result of such a conflict is social polarity, and that triggers another round of planning, or the failure of planning. So it starts from a very simple thing and can grow into a big one.
But the very big question is: okay, we have a very good strategic vision. I'd like to inform you that the 15 minutes are gone, so please feel free to finish, but also mind the time a little bit. Okay, one moment. We have to translate our long-term vision to the current context using an action plan. How can we do that? We can use a conventional SWOT method, but separate the future and the present; then we have a lot of ways to go, even with threats, toward the long-term vision. And I also developed another tool to analyze action planning, with a quantified impact score and feasibility score, as you can see in the table. With that, we can use it for action prioritization, resource allocation, analysis of possible action conflicts, or actor dynamics forecasting. I used this to analyze the Saigon-Dong Nai river delta and to explain why a particular solution had been chosen. To conclude: the dynamic MOTA can help to plan and to decide on the action plan, so that it can satisfy the maturity and also the feasibility of the roadmap. Thank you for your attention. Thank you very much, Phi. That was very interesting, and sorry for cutting you a little short, but I think we have enough food for thought and for discussion. I would like to again invite our audience to type their comments or questions in the chat in the meantime, while we continue listening to Floor, with a presentation in which systems thinking is a bit more central. So Floor, the floor is yours. Can you see, are you seeing this slide? Yeah, this is great. Okay, we practiced, but it still never goes quite as planned. Hi everyone. My name is Floor. Thanks, Leon, for inviting me. Leon asked me to talk about systems thinking and redesigning deltas, and I thought: that's easy, I just don't have to talk about actors. And I found out, in an interesting exercise for myself, that I can't decouple systems perspectives from actor perspectives, and I hope that after this presentation you will know why, and why we see these systems perspectives and actor perspectives as so complementary. Leon already mentioned it, but I work at the Policy Analysis section at the faculty of TPM, Technology, Policy and Management, and everyone in our faculty is in some way concerned with how humans interact with technology, infrastructure or the environment. Policy analysis is a relatively small field. I'm not going too deep into what it is, but it might be nice for you to realize that I see it as a rational, systematic way to support policy-making processes. Whatever activity you do that in the end results in better policy advice, or maybe even a better decision, can be construed as policy analysis. Policy analysis is usually a sequence of steps; depending on whom you ask, as in this figure, there are a few definitions, but all of them start with: okay, what is the problem? What is the system that the problem exists in? What do I actually want, so what are my objectives? What do I need to change? And then you find alternatives, you weigh them, compare them, and based on your analysis you decide which one is the best one and you give advice on that.
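As a toy illustration of the weigh-and-compare step in this sequence, here is a small Python sketch using a weighted-sum score over objectives. The objectives, weights and alternatives are invented for illustration; the talk describes only the sequence of steps, not any particular scoring method.

```python
# Toy illustration of the "weigh and compare alternatives" step of a policy
# analysis. Objectives, weights and scores are invented assumptions.

objectives = {"flood_safety": 0.5, "nature": 0.3, "cost": 0.2}  # weights sum to 1

# Score of each alternative per objective (0 = worst, 10 = best; for cost,
# higher means cheaper).
alternatives = {
    "high concrete dike":   {"flood_safety": 9, "nature": 2, "cost": 4},
    "sediment nourishment": {"flood_safety": 6, "nature": 8, "cost": 5},
    "managed realignment":  {"flood_safety": 5, "nature": 9, "cost": 7},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of an alternative's scores over all objectives."""
    return sum(weight * scores[obj] for obj, weight in objectives.items())

for alt in sorted(alternatives, key=lambda a: weighted_score(alternatives[a]),
                  reverse=True):
    print(f"{alt}: {weighted_score(alternatives[alt]):.2f}")
```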
What struck me when I was starting out in my research is that all these sequences of steps say: well, compare these alternatives. But no one really went into where these alternatives come from. So who is designing here, and why? Why are we, as policy analysts, not shedding light on this design phase? Can we do it in a different way, and can that bring us something? So years ago I started out with a PhD in which I wanted to do co-design in coastal systems, so with people, and I anchored it in systems thinking. First, early in my research, I tried to understand what the system was and how things are interlinked; I'll tell you a bit later how you do that. Then I figured: okay, in these system models that I make, I make assumptions. Can I not use the expertise of other researchers, or maybe the expertise of people actually living in such a coastal system, to enrich my own system understanding? And a step further: can we then design together and find different solutions? These were the questions I was struggling with, and maybe still am. First of all: what is a system? What are we talking about when we talk about systems? Again, you can find many definitions, but know that systems are constructed: a system is the part of reality that is being studied as a result of the existence, or suspicion, of a problem. So problems and systems are related. Systems are built from system elements and interconnections. There is also a system boundary, and whatever is outside the system boundary is not in our system. And because systems are constructed, because we make them, we get to choose what those boundaries are, what we include in our analysis and what we exclude, and what the environment is that influences the system. Leon already mentioned that we study complex systems. Complex systems are systems that are so complex that you can't understand the behavior of the overall system even if you knew all the parts together; complex systems behave in a different and more unpredictable way. Really interesting, right? And because you have a choice, whoever you are, if you're a project manager, or if you have a project in a complex delta system, for example, you get to choose what you analyze in that system. I chose to look at social-ecological systems: systems in which I look at the biological and physical environment, the natural system, because I was interested in coastal systems, but similarly you can think of delta systems, in which water and nature, the physical parts of the system, are important, but also humans. Humans interact with that environment: interventions in that space would be technologies, dikes or infrastructure, but the physical environment also responds back to the human system. Maybe you think this is a very broad definition of such a complex system, and that was more or less the intention: I wanted to keep it broad for what we are looking at here, coasts and deltas.
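Here is a minimal sketch of the "system = elements + interconnections + boundary" idea in code. The element names and links are invented (loosely echoing the Texel erosion case mentioned later); the point is only that the analyst chooses the boundary, so the same reality yields different systems.

```python
# Minimal sketch: a system as elements, interconnections and a chosen boundary.
# Elements and links are invented for illustration.

interconnections = {
    ("sea", "beach"): "erodes",
    ("beach", "dune"): "feeds sand to",
    ("tourists", "beach pavilion"): "bring income to",
    ("beach pavilion", "beach"): "occupies",
}

# Two analysts may draw the boundary differently around the same reality:
narrow = {"sea", "beach", "dune"}                 # purely physical view
broad = narrow | {"beach pavilion", "tourists"}   # social-ecological view

def links_inside(boundary: set) -> list:
    """Interconnections whose endpoints both fall inside the chosen boundary."""
    return [(a, b, label) for (a, b), label in interconnections.items()
            if a in boundary and b in boundary]

print("Narrow system:", links_inside(narrow))
print("Broad system: ", links_inside(broad))
```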
And you can imagine that, because we get to construct these systems ourselves, you have different system perspectives. If you have a small local coastal issue, for example erosion on Texel, the first barrier island in the Netherlands, then your system, your geographical scope, is much smaller than if you look at another problem, such as the high water a few weeks ago. This image is from the Dutch national news, and you see where the water was high or extremely high; the system perspective here is much broader. But still, you see that the Dutch news took the perspective of the Netherlands. We all know that the Rhine doesn't start here; it starts in the Alps. But we didn't consider that. So there is always something outside your system and something in it, and you get to choose. Similarly to those spatial scales, we can also consider interests: are we looking only at flood risk? If you looked at the news the last few weeks, it was a lot about high water and flood risk, but also about nature, and we heard foresters say: those managed high-water floods were actually quite beneficial, a blessing for nature, because they will increase biodiversity in the long term, since this part of nature got more water than initially expected. In addition, you can think of other aspects by which you define your system: jurisdictional levels and temporal scales. Certain people, for instance engineers, build a dike to last 100 years, but someone in agriculture, or someone who lives on the coast, might think in seasons, or about the future of their kids. The same goes for governance layers: there is the European level, the national level, the regional level. All these perspectives are things you can choose, but they also add complexity to the system you're analyzing. And the question for me was: okay, does that then affect design? Does the system we define in the beginning affect the solutions we find for the problem later on? The line of thinking here, if I stereotype, is: if you think an engineer really designs for safety, their primary objective being flood safety or flood risk prevention, then you can imagine that they would design a very high concrete dike against the water. But if you would also design for nature, you can think of more dynamic solutions, such as sediment nourishment. Similarly, we saw this shift in the 20th century in Dutch water management, where first flood prevention was much more important, and later stakeholder interests and also nature conservation became more important and were valued differently by society. I just want to say that if you're interested in how these perspectives can alter design, there are a few MOOCs that I know of in which you practice exactly that: you get to design for a particular problem and then see how the design changes if you do it according to different design principles. There must be more, but I know of these; maybe Leon can share the links later. Another example in which we see that perspective can alter design is the Room for the River program. This is a famous example, which you see here on the right-hand side, with the object of our interest. In the initial design, the designers said: okay, we need to move this farmer's company, because this part of the land should be able to be flooded by the river, so you need to move. Then there was a community-based initiative in which the farmer proposed: well, I can stay, I can allow for the higher water, I will take that risk, and I will build an elevated bit of land, which in Dutch we would call a terp, a small land elevation.
In case of extremely high water, I will go there and my cattle will go there, and it will only happen a few times per decade, so that's fine with me. And you see there that the community perspective brought a new solution that the designers initially hadn't considered, because they had assumed that that person would not want to do that and would want to move entirely. So the question, if you're designing or trying to find solutions, is: which knowledge do you include? I'm framing it here in an interdisciplinary way. That means you use scientific knowledge, which you always use if you're looking at such complex delta systems, because there is so much expertise about them: think of hydrologists, ecologists, biologists, or government experts. They all know little bits about that delta system. You can imagine that you need that knowledge, and you know it's somewhat scattered, so there is a whole process of deciding whom to involve and what knowledge to use. On the other hand, there is a societal knowledge space: actors and their interests. We now know that the community can also bring knowledge; they know things about their environment that may be expressed in a different language or a different formalism, but that can still contribute to reaching a good solution. I also saw this in a case that I did for my PhD. It was called the CoCo Channel project, because we like nice acronyms, and it concerned a local erosion problem on Texel. If you look at it purely from the law and from flood risk, there was more erosion than the law allowed, and from that particular, somewhat narrow, system perspective you can think: either we have to change the rules in the law, or we have to nourish more sand. We did a whole participatory co-design process, and we found that the people living there and earning their livelihoods there, in particular some beach pavilion owners, people earning their money from tourism, were actually willing to move if they were financially compensated. But that solution had not been considered by the government authorities, because they had ideas about fairness and equality: if we give this person money to move, should we then also give other people, other actors, money or similar compensation? So there was a whole set of assumptions about values and equality that didn't come up in earlier analyses. In this research, you see there were more solutions, and you can frame that as a solution space. And design is choosing, I think, in a way: if you design something, you make a set of coherent choices in a solution space. You can imagine it like this: if you have only one choice, say what height the dike should be, medium or high, then you can represent that as a one-dimensional space, a line, and you can slide along it and think about that choice. If you have to make more choices, which you usually do, then you create a multi-dimensional space, which I have tried to draw on the slide. As such, design, or co-design, is a way to identify trade-offs. If you do it participatorily with each other, you see that the price of one thing is that there is less of another thing; you make trade-offs and you can identify them, whereas if you would only ask people what they like, they would say: yes, I want to save the dike, I want nature, and I want it cheaply. Those trade-offs really become clear in designing together, as the sketch below also illustrates.
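To show how a discrete solution space makes trade-offs visible, here is a small Python sketch that filters a set of candidate designs down to the Pareto-efficient ones. The designs and their scores are invented assumptions echoing the examples above; the transcript describes the concept, not this computation.

```python
# Sketch: a discrete solution space and its trade-offs. Designs and scores are
# invented; each design is scored on three objectives (higher is better).

designs = {
    "high dike":              {"safety": 9, "nature": 2, "affordability": 3},
    "low dike":               {"safety": 5, "nature": 2, "affordability": 3},
    "nourishment":            {"safety": 6, "nature": 7, "affordability": 5},
    "compensated relocation": {"safety": 7, "nature": 8, "affordability": 4},
    "do nothing":             {"safety": 1, "nature": 6, "affordability": 9},
}

def dominates(a: dict, b: dict) -> bool:
    """a dominates b if it is at least as good everywhere and better somewhere."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

efficient = [name for name, sc in designs.items()
             if not any(dominates(other, sc)
                        for o, other in designs.items() if o != name)]

print("Pareto-efficient designs:", efficient)
# "low dike" drops out (dominated by "high dike"); every remaining pair
# embodies a trade-off: choosing one gives something up on some objective.
```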
Similar to Phi, I am sorry to interrupt your talk, but we are past the 15 minutes, so please finish. I will mind the time; I am almost there. This is my last content slide, in which I show how we compared different case studies around the world. I wanted to highlight an example from Ireland, in which we also saw that when you include community stakeholders, they actually really want to be engaged, but they didn't know how to access the policy-making process. I think that connects nicely to Phi's talk: even if they have the capability, how do they get access to the policy-making space? The solution identified in that research was that the course of action should be bottom-up and community-integrated. So the solution was in the social subsystem, and not in a technical solution to the water management problem. This is my last slide. If you can remember one thing, think: okay, systems are constructed, I construct systems, so what am I including in my initial analysis? Because that will affect the solutions you find later on. That's it from me today, thank you. Great, thank you, Floor. I think we didn't mention it explicitly, but I think it's very clear from your presentation, from the introduction to this last slide, that you not only work at TU Delft but specifically in the Multi-Actor Systems department. So actors and systems are well connected there. If you like, you can stop sharing your screen. Then we have some time for discussion, for questions and answers. I haven't seen a huge number of questions yet in the chat, so I would once again like to invite everyone: if you have comments or questions, please type them in the chat. We have some ten minutes to discuss, so I think there is room to address some of your concerns if you have them. I found it really interesting to see, Floor, that with your presentation we again got to not just the systems but also the stakeholders, and how working with systems thinking together with stakeholders helps to shape the solution space, and maybe broaden the set of alternatives. Whereas the approach that Phi was presenting, the dynamic MOTA, is a bit more of an analytic approach to capture what is going on, and then, based on that, to think about your interventions. So I think those are nice, complementary and different ways to look at fairly similar things: how actors and systems in strategic delta planning play their roles and influence each other. Maybe that is something you would both want to reflect on. But first, I saw one very clear request for Phi: whether he could again show the action planning tool. Maybe Angelica can put that slide up. Phi, while you explain a little what is on this slide, please also add how you have used this in practice, and how others responded to it, like maybe the stakeholders in government that you have worked with.
That may also allow the person who asked to make his question a bit more precise and type it in the chat. Okay, thank you. On the table, the horizontal scale is the MOTA score. The MOTA score is the product of motivation and ability, and it is based on surveys, so we have to conduct a lot of surveys, especially social surveys, which are very big. For example, for the Mekong Delta we did a thousand questionnaires, just to ask people questions to understand their motivation, what they want, what they can do, and which complementary abilities they need, things like that. For provincial planning, we also talked with a lot of private-sector parties, hundreds of them, to see how they want to be part of this strategic planning and contribute to the vision. That gives the MOTA score. But the MOTA score is not everything, so I introduced another score, the impact score. The impact can be negative or positive, as the result of a specific action: if we do something, the outcome may exert some impact on the relevant stakeholders, and that impact is not always good. It uses a very simple scale from minus two up to a maximum of plus two, where minus two is very negative. If you then calculate the product, you can see the result: minus two times minus two is four, and the same for any pair of numbers in the table. And do you have results from using this method, or earlier versions of the MOTA method, in Vietnam? That's another question from the chat that relates to this slide. For the MOTA score, we would have to go back to another slide to show how it is calculated, but it is essentially a calculation based on the scales set in the questionnaire, although it requires a lot of effort. On the other side is the impact score: from the gains and losses, you can determine the dynamics of the actors. One actor may leave the group, or another may join the original group; that dynamic can be explained by the impact score. And based on that, we have three applications. First, if you look at the bottom-right corner, that gives you guidance on what you should do and what should be the priority. And if you look at the upper-left corner, you can see the opponents: those actors may be counter-actors, and there may be potential confrontation. I think the question was also interested in practical applications, and I am aware of, maybe not the newer evolution of the dynamic MOTA, but the more traditional MOTA being applied in studies; I don't recall the exact details, but they were commissioned by the World Bank, for instance in the Vietnamese Mekong Delta, and the approach was also taken up by a research consultancy group from Australia working in the United States. But I would like to move on in view of time, because I see our hour is almost finished. Thank you for sharing the slides.
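The scoring just described suggests a simple computation; here is a minimal sketch, assuming the minus-two-to-plus-two scales from the answer above. The actions, stakeholders and numbers are invented, and the exact axes and quadrant layout of the original slide are not recoverable from the transcript, so the verdict logic is an assumption in the same spirit.

```python
# Minimal sketch of an action-prioritization table in the spirit of the
# action-planning tool discussed above. Each (action, stakeholder) pair gets
# an impact score and a feasibility score on the -2..+2 scale mentioned in the
# Q&A. All entries are invented; the quadrant interpretation is an assumption.

entries = [
    # (action, stakeholder, impact on stakeholder, feasibility for stakeholder)
    ("build reservoir", "farmers",       +2, +1),
    ("build reservoir", "native people", -2, +1),
    ("reforestation",   "farmers",       -1, +2),
    ("reforestation",   "activists",     +2, +2),
]

for action, actor, impact, feasibility in entries:
    product = impact * feasibility  # note: -2 * -2 = +4, as remarked in the Q&A
    if impact > 0 and feasibility > 0:
        verdict = "priority: supported and feasible"
    elif impact < 0:
        verdict = "confrontation risk: actor may oppose"
    else:
        verdict = "low urgency"
    print(f"{action:15s} / {actor:13s} impact={impact:+d} "
          f"feasibility={feasibility:+d} product={product:+d} -> {verdict}")
```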
I would like to move to another question, which I also found quite intriguing, posed by C., and apologies for my pronunciation of your name: it is about community perception, whether this is a one-way or a two-way issue, how you deal with that, and maybe also how you deal with the fact that communication about perception is, in a way, never-ending. Maybe you could reflect a bit on that. Yes, I am also reading the related question: how can we convince the community to move out of the flood zone? What I saw, in just the two examples I gave (there are more examples than I can cover here), is that the resistance to moving is not as big as we would initially have thought. Local stakeholders, and we see this especially for people who live in or near estuaries and deltas, have contact with the water: they are quite aware of long-term risks and also willing to move. So I think the assumption that they are not willing is maybe too strict. Okay, thanks. I see that the questions are quite numerous now, and some are actually quite hard to answer; for some, we may be able to share a few pointers to further information on both of your approaches and presentations. But looking at the clock, I think we have had a very rich hour and we are getting to a close. So let me close by giving really warm thanks and applause to Dr. Floor and Dr. Phi for sharing their knowledge and views with us, and, while doing so, also learning things themselves; that is, I think, always the best way. And thanks to the audience for joining, for participating, for your comments and questions, and, as I saw in the chat, for actually also answering each other's questions; that is even more useful. I'm really happy to see that this was quite an interactive webinar. The webinar is hosted by the TU Delft Extension School for Continuing Education, and at the end I would like to bring your attention to the fact that TU Delft offers a large range of online courses on various topics, definitely also including the topic of today's webinar and of this Dies Natalis week of TU Delft on redesigning deltas. Floor already mentioned a few, but the QR code on this slide and the link here should enable you to see the collection of courses most directly relevant to the topics we have been discussing. One specific small program in this portfolio is directly on the topic we discussed here: systems and actor approaches for water strategy and planning. The next run of that program actually starts quite soon, next month, in early February. So with that, once again thanking everyone for their contributions and participation, most notably Phi and Floor, I would like to close this webinar. Thank you all very much for joining. I hope you have enjoyed it as much as I have, and I hope to see you again in the future, maybe some of you in real life, and others online. Bye everyone"
"QS Reimagine Education Award announcement – December 2023","https://www.youtube.com/watch?v=qU2JWrf9Jj4","Reimagining education means going beyond formal education to provide lifelong opportunities for personal and professional development.
The lifelong learning award recognises initiatives that create impact and provide opportunities for this to happen. Delft University of Technology wins silver for reimagining the educational needs of healthcare professionals in low- and middle-income countries. Realising that good healthcare provision requires rigorous and innovative training, the medical technology portfolio equips professionals to tackle pressing healthcare challenges by providing knowledge based on the latest research and applications, ultimately helping them to save lives. Congratulations to all the winners, and remember: never stop innovating."
"Risk Management Summer Course - Intro Video","https://www.youtube.com/watch?v=T99n5ZUlOs0","Yeah, I think it's very practical, very actionable. Of course there's a lot of theory, but then it's interesting to see how to actually implement it in real situations. The big advantage of being here are the professors and lecturers: it's a very practical course, but with a really academic grounding, with top academics in our field, so it's fantastic. We are seeing in our field really low consumption because of COVID, low consumption and low prices, and almost no margin. The sessions on business continuity were particularly interesting to me, because I never had an idea of what it looks like in practice, and we had a lot of practical examples; we were not just theorizing. The sessions on risk management were very accessible, in the sense that you do not need any prior knowledge to understand what's being discussed, but they were not superficial either; if anything, they provided stimuli for people who wanted to dig deeper. I participated in the summer course on risk management and business continuity management, and I decided to participate because I think it's a very relevant issue; the topics of the summer course also come back in my specialization subjects, which include innovation and entrepreneurship, security and IT management, and some finance. We reckon that we can reduce about 10% from here without being worse off. For me personally, I did the ISO 31000 exam on risk management and the ISO 27001 exam on information security, because I think they have a direct relation with the next career step I am going to take. One of the things that we came up with was that you can partly decouple this from IT services and so on to make it real, give it a bit more visibility, and then tell them what's going on, for example. I really recommend this summer school. It's useful, it's fun, it's good for your CV, and your boss will be happy too. What more can you wish for?"
"Citizen Participation in City Politics","https://www.youtube.com/watch?v=z565KLjipEk","Welcome. Now we'll focus on the influence of citizens in co-creating policies and politics for sustainable cities. First, we'll look at direct participation. Next, I discuss the participation ladder to assess citizens' influence in co-creation. Finally, I will give three examples of direct participation. When you think of citizens in city politics, you may think of voting in elections and referenda, or maybe about protests and large-scale demonstrations. But citizens are engaged in many more ways.
For example, they tweet to oppose a new policy, they deliberate about the future of the city, or they start a community garden. Besides formal and organized political participation, citizens influence and co-create sustainability in many ways. Direct participation is of special relevance. This form of co-creation refers to the active and direct involvement of citizens in decision-making. By actively including citizens and their voices, knowledge and ideas, city politics may become more effective and efficient, better informed and more democratic. These forms of direct citizen participation are often government-led, but companies and NGOs can also take on the role of organizing or supporting direct citizen participation. In 1969, Arnstein famously developed a ladder of citizen participation. She visualized how much influence citizens could have on politics and policies. At the top of the ladder, citizens have real influence: they are in control, partner up with other stakeholders, or have delegated power. This is when co-creation takes place. When we move down the ladder, we go from tokenism, or lip service, to non-participation. Note that according to Arnstein, consultation and informing are merely forms of tokenism. In some cities across the world, however, these steps would be considered important to increase the influence of citizens. Let's have a look at some examples. The first example is a public hearing, which can be organized by local town councils to inform citizens about a decision and to let them bring in critique and experiences before a policy is issued. In the photo, you see a public hearing in Seattle about a city budget. The second example is a form of more influential participation: participatory budgeting. It became famous through experiences in the cities of Porto Alegre and Belo Horizonte in Brazil, and it's now applied all over the world. In New York City, for example, local people decide where some of the government money is spent. The last example is Deliberative Polling, which was developed by James Fishkin, professor at Stanford University. Deliberative Polling consists of three steps: a baseline opinion poll among randomly selected participants; a week of discussion and deliberation among a selection of citizens and experts; and then a final poll with questions similar to the first one. Deliberative Polling often results in significant changes in public opinion and in city politics. An example is Deliberative Polling in the Tamale metropolitan area in Ghana, West Africa. Through the Deliberative Polling process, citizens became more informed about food and water security. They also prioritized, for example, a rainwater harvesting system in schools as an intervention. These examples demonstrate that direct participation in city politics is an important and widespread form of co-creation. The promises of direct participation are many: more engaged, empowered and better-informed citizens; better-informed and more effective policies; and ultimately more legitimate and democratic city politics. While recognizing the potential benefits of different forms of co-creating city politics, I conclude with a warning. If citizen participation is not taken seriously by governments and other stakeholders, citizens might lose trust in their city government. When co-creation fails, this affects democratic legitimacy and may harm future decision-making."
"Educating the world TU Delft Extension School for Continuing Education","https://www.youtube.com/watch?v=XaDsx110C48","The Extension School of Delft University of Technology is dedicated to developing and delivering high quality continuing education, whilst also continuing to improve the quality of campus education. We want to equip people to solve today's global challenges, and while I learn as include many from students and researchers, we especially support professionals to upskill of cells in important technological and engineering developments. Our mission is clear to make a positive impact on education, the lives of learners, and the world at large. We remain at the forefront of open and online education. Reaching hundreds of thousands of people around the world, we help them gain access to higher education in a flexible, effective and more affordable way, even translating our courses into other languages to increase accessibility. Our portfolio focuses on the themes of great relevance to society. Delivering them at a variety of formats, and through a growing number of short programs. What we do we do for others, and we are proud of our success, of what the numbers signify. And of the global recognition we receive. But what we are most happy about is the impact we make, not only on individual's lives, but also in terms of the numerous institutions that freely reuse our open licensed materials, and our educational research. Our own lecturers and campus students benefit from all the reuse of online resources, and teaching methods, and from the innovative practices and tools. Learn our evaluations and testimonials speak for themselves. Our pedagogical model is key to our success, ensuring that learners' needs are the center of the learning experience. It is adaptive, research-based, tested, and effective. And we make it publicly available. We share and grow together with our external partners who are as important to us as internal ones. Networking and collaboration with universities and industry help us give learners knowledge that is highly applicable. Our lifelong learning and contributing to education are gaining even more prominence at national and international level. And we believe this opens up even more opportunities to share experiences and build collaborations. We look forward to keep developing online education, for and of the future. Let's talk." "Constructing a Game Tree – Water Strategy and Planning","https://www.youtube.com/watch?v=zG5QQTj6iX0","Welcome to this clip, a non-cooperative game theory. My name is Leon Hermons and in this clip, I will show you how to construct and analyze the game tree. In this clip, we will use the example of the damn just upstream of a beautiful asjurry and coastal village whereby the damarezer of war serves a nearby city but negatively affects river flows to the downstream asjurry and increases flood risks for the coastal village. The same case as we have seen in the previous clip in this series. We follow the steps as outlined in the textbook in chapter 6 called appraising the strategic value of information extensive games. The first steps in this process are well supported by making an option stable and conflict graph as shown in the previous clip. For this case, we develop the option stable shown here on the slide. The next step is to review the order of play, which player is first to decide and announce its move. 
Sometimes this is clear, in other cases you may want to explore different versions of the game to see if the order of play matters for the outcome. We start with assuming that the city of Mossel Bay moves first, that the department of water affairs moves next, and that the village of Great Brak then responds as last player in the game. You may also recall that we had looked at the values and preferences of players for the possible outcomes of the game. We compared for each pair of outcomes which one would be more preferred by a player. The result is a preference order of outcomes, with the most preferred outcomes first and the least preferred or worst outcomes as the last ones in the list. For a game tree, we then need to transfer those ordinal preferences into numeric payoff values. This can be done in various ways. One way could be to assume an equal distance among outcomes and put them on an inverse scale, from 8 for most valued to 1 for least valued. Do you know what values should go on the question marks here on the slide? Pause the clip and take a minute to check before you continue. Did you indeed have the same values for the question marks on the slide? With that, we now have all the information needed for a game tree. We now have developed a complete game tree, or a game theory model in extensive form. Let's analyze our game and see if we can find the Nash equilibrium strategies for the players. In the absence of cooperation and with fully rational players, we would expect players to use these strategies. Computers can help you find them, for instance with the Gambit software, but we can sometimes also find them ourselves using backward induction. The village of Great Brak moves as third and last player. They are the green player in the tree. In the first branch at the top, Great Brak can move to outcome 1 or outcome 2. This gives the village a payoff of 2 in outcome 1 or a payoff of 1 in outcome 2. This is the last payoff in green listed here at the end of the tree. Logically, the village would move to outcome 1. So we can cross away outcome 2. It will not be selected by the village. We can do the same for the other three branches, where the village controls the last moves to either outcome 3 or 4, outcome 5 or 6, or outcome 7 or 8. Can you see which of these outcomes will be selected and which ones we can cross away? For each of these pairs of outcomes between which Great Brak can choose, look at their payoffs, the last numbers in green at the end of the tree. We can cross away the outcomes with the lower payoffs. For the remaining, reduced tree, we can now move to the second to last player, which is the department of water affairs. This player in the top branch chooses between outcome 1, with a payoff of 5, which is the blue payoff in the middle, or outcome 4, with a payoff to water affairs of 1.5. As a rational player that seeks to maximize its benefits, its payoffs, the department of water affairs will move to outcome 1. We can cross away outcome 4. In a similar way, outcome 5 is preferred over outcome 8 by the department of water affairs. We now reach the first player, the city of Mossel Bay. Knowing what the other players' moves are and what their preferences are, Mossel Bay knows that essentially it chooses a move either to outcome 1 or to outcome 5. In outcome 1, the payoff to the city is 8. This is the first payoff in red. In outcome 5, the payoff to the city is 4.5. The city will thus move to full claim, resulting in outcome 1. 
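(Editor's note: since this clip walks through backward induction, and a bit further on the Hicks optimum, by hand, a minimal Python sketch of both computations may help. Only the payoff triples named in the clip, such as outcome 1 = (8, 5, 2), are taken from it; the remaining triples are hypothetical placeholders, so the printed result illustrates the method, not the actual case.)

```python
# Minimal sketch of backward induction and the Hicks optimum for a
# three-player, binary-move game tree like the one in this clip.
# Payoff triples are (city, water affairs, village); values marked
# "hypothetical" are placeholders, not taken from the clip.

PLAYERS = ["Mossel Bay", "Water Affairs", "Great Brak"]

payoffs = {
    (0, 0, 0): (8, 5, 2),      # outcome 1 (values from the clip)
    (0, 0, 1): (1, 1.5, 1),    # outcome 2 (values from the clip)
    (0, 1, 0): (6, 3, 4),      # outcomes 3-8: hypothetical values
    (0, 1, 1): (2, 1.5, 3),
    (1, 0, 0): (4.5, 7, 7),
    (1, 0, 1): (3, 2, 5),
    (1, 1, 0): (5, 6.5, 7),
    (1, 1, 1): (2.5, 4, 6),
}

def backward_induction(prefix=()):
    """Return (outcome, payoff vector) reached by rational play from prefix."""
    depth = len(prefix)
    if depth == len(PLAYERS):          # leaf of the tree: an outcome
        return prefix, payoffs[prefix]
    # The player moving at this depth picks the branch that maximizes its
    # own payoff, anticipating rational play further down the tree.
    branches = [backward_induction(prefix + (move,)) for move in (0, 1)]
    return max(branches, key=lambda branch: branch[1][depth])

outcome, eq_payoffs = backward_induction()
print("Nash equilibrium outcome:", outcome, "with payoffs", eq_payoffs)

# Hicks optimum: the outcome(s) with the maximum combined payoff.
hicks = max(payoffs, key=lambda o: sum(payoffs[o]))
print("Hicks optimal outcome:", hicks, "total value", sum(payoffs[hicks]))
```

With the real payoff table from the options table filled in, the same few lines reproduce the crossing-away steps described in the clip.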
And this leads us via backward induction to identify outcome 1 as the Nash equilibrium outcome for this game. The city of Mossel Bay claims its full amount of water, the department of water affairs manages the dam for environmental flows, and the village of Great Brak restricts its development. We now know what outcome to expect in this non-cooperative game, but is this also socially the best possible outcome? If not, we may want to consider a policy intervention to improve the expected outcome for this situation. Two social optima have been defined, by Pareto and by Hicks. Chapter 6 of the book explains both, but we will look at the Hicks optimum here. The Hicks optimum looks at the maximal total value in the game. It simply takes the sum of the payoffs for all players. The outcome or outcomes with the maximum combined payoff is or are Hicks optimal. If we look again at our game tree, we see that for outcome 1, the combined payoff is 8 plus 5 plus 2 is 15. For outcome 2, it is 1 plus 1.5 plus 1 is 3.5, et cetera. If we do this for all 8 outcomes, we see that the maximum value is 18.5. This is the combined payoff for outcomes 5 and 7. So these are the Hicks optima. In both outcomes, the city of Mossel Bay would only put down a partial claim on the water and the village of Great Brak would restrict development. In these cases, there is more total value in the game than in the Nash equilibrium outcome. If players can agree on how to redistribute this value, and hence start to negotiate and cooperate, all three should be able to obtain at least as much as or more than in the Nash equilibrium outcome 1. It will, however, require that especially the village of Great Brak in this case gives a convincing and trustworthy signal to the city of Mossel Bay that the village will not opt for unconstrained village development, and that the village will distribute some of its value to Mossel Bay to convince the city to opt for a partial claim only. The department of water affairs may be able to mediate or enforce such a redistribution. With this, we conclude this clip. We have constructed and analyzed a game tree for a given problem. The game tree suggests that cooperation could lead to a better outcome, but will benefit from some intervention to persuade the first mover in the game. To be more certain of this policy conclusion, we would also need to analyze the game for a different order of play, or even for simultaneous play. You can do this yourself, for instance using Gambit. Hopefully, you now have a good basis to construct and analyze your own game trees. And of course, please also consult the other course resources for further background information. Thank you for watching." "Policy Analysis Problems – Water Strategy and Planning","https://www.youtube.com/watch?v=u2X4yDTF-FM","What makes a problem a policy analysis problem? What are the key elements that help us to decide whether or not an issue is relevant for policy analysis? In this video, we will address these questions. Problem formulation is critical for policy analysis. Solving the wrong problem is one of the most well-known pitfalls, and unfortunately, it is quite common. Before you invest in any further problem analysis, you will want to do a first check, to see if you indeed can expect that your topic could benefit from policy analysis. This check is fairly easy to do once you know the key components of a policy problem. Let's look at them one by one. 
First, we speak of a problem when there is a gap between the desired situation and the existing situation, or when we expect a future gap between a desired future situation and the expected future situation. For instance, 30 years ago, climate change was a problem because rising temperatures were expected to change future climate conditions, leading to numerous undesirable consequences. Today, you could argue that climate change is a current problem, with more and more extreme weather events that lead to loss of life, economic damage and other undesirable consequences. These are relevant gaps for a policy problem. Not everyone may worry about the same gap. What for one organization may be a problem may be an acceptable situation for another. For instance, for climate change, NGOs like the World Wildlife Fund or Extinction Rebellion will consider investments in developing oil fields a problem. They want to stop this as soon as possible. For big oil companies like Shell or Exxon, developing oil fields is not a problem. For them, stopping these developments in the short term would be a problem, which they would argue leads to an economic crisis during the transition period towards renewable energy. In short, the problem is subjective. Different stakeholders will see different desirable futures and therefore see different problems. You have to be explicit about your problem owner. Is this the ministry for environmental affairs or the ministry for industry? It will make a difference. In a policy, we formulate actions, strategies or regulations that we expect will help us to improve the situation. We do this to close, reduce or prevent a gap, for instance, to prevent damage from flooding. This means that for a policy problem, a problem owner and a gap alone are not sufficient. We also need to have some influence on this gap, some actions we could consider to solve the problem. This influence can be direct, but also indirect. For instance, Extinction Rebellion cannot force oil companies to stop developing new oil fields. However, they can mobilize public support to influence national governments, who can stop giving out permits for the development of oil fields. If the gap is not actionable at all for your problem owner, it is not a policy problem. For instance, Extinction Rebellion may have planned a demonstration on Thursday, and it turns out that for this Thursday, a heavy thunderstorm is expected. This thunderstorm is not something they can prevent or influence. It is an event, not a problem. Of course, how to respond to this event can be a problem. They can choose, for instance, to postpone their action or not. And this brings us to the next element of a policy problem, at least for policy analysis. We have a gap. We have actions or policies to influence this gap. But to require further analysis, we also need a dilemma: a dilemma created by the presence of different alternative actions, each of which will have partly desirable and partly undesirable consequences. There are trade-offs involved. If it is clear that the thunderstorm will be so heavy that action is no longer possible, there is only one choice for Extinction Rebellion: postpone the action. But if there is something uncertain about the severity of the storm, and postponing the action would require a lot of additional effort, a trade-off emerges. It is no longer clear if it would be better to continue with the action as planned, or if it would be better to change its form or to postpone it. 
Finally, policy is all about coordination or collective action. This can be coordination in a private company between different corporate units or staff members, or coordination in the public domain between multiple actors. This means that besides your problem owner, other stakeholders or actors will also be involved in the problem. And please note that in this course we use actors and stakeholders as synonyms. In public policy settings, the term actor seems more common; in corporate management settings, the use of stakeholder may be more prevalent. We will use both to refer to organizations, groups or individuals who play a role in a policy problem, either because they also care about the problem, because they are supporters or opponents of your problem owner, because they are needed for a successful solution, because they are likely to be affected by a solution, or because they are causing the problem. So, in conclusion, your very first step as a policy analyst is to check that you are dealing with a policy analysis problem, and to do a first check against the key elements: problem owner, gap, actions, trade-offs causing a dilemma, and actors or stakeholders. If you have checked that those elements are there, you are ready to start. Good luck with your policy analysis." "Room for Rivers: Landscape Architecture","https://www.youtube.com/watch?v=GjIP_NRWCvg","Welcome back. With this short lecture, I'd like to introduce the discipline of design, or more specifically landscape architecture. You see the outline here. All humans depend on the use of tools, ranging from a house to clothes and stoves and other household utensils. And that's why we are surrounded by artifacts, from a rather moderate amount of stuff in Mali or Mongolia, as you can see, to a stunning amount of artifacts in rich countries like Japan and the USA. Every man-made thing in the world has a shape, a spatial form in which it is handmade or manufactured. Artifacts are evolving, becoming better and better for their intended use. Take such a simple thing as a spoon, which has been around for perhaps 10,000 years. First crafted from wood, with the distinct disadvantage of a taste memory. Copper or iron, the next step in the evolution, can be mass manufactured, but also influenced taste. Silver, as an almost inert material, does better, but is way too expensive for the mass public. Finally, in the forties of the last century, stainless steel brings the best of both worlds: less expensive and no taste influence at all. Design plays a role in the later stages of this evolution of the spoon, giving form, shaping the shape, so to say. The iPhone and the John Deere tractor both are the product of intense cooperation between technicians and designers to shape these artifacts. Mostly, the role of the designer is servant and anonymous, and mostly the designing process is form follows function. Sometimes the role is more prominent, like how Jony Ive helped to shape the brand of Apple with sharp designs. He became a celebrity, even a knight, Sir Jonathan Ive, and went beyond form follows function. He was obsessed, fascinated with making things as thin as possible, like the famous wafer-thin but rather flimsy butterfly keyboard he designed, and removing the MagSafe power connector, the HDMI port and the SD card reader from the MacBook Air. So, design might also influence, even command, technology. I can see some learners asking themselves: are we in the right edX course, where is all this leading? 
But these were, I assure you, functional stepping stones to ask ourselves the question whether landscapes can be designed. This Arcadian picture shows the Dutch peat landscape Waterland, just north of Amsterdam. The naked eye would take it for a nature reserve, but you are looking at the result of 600 years of reclamation history, starting with vast moorland like this one. Reclaiming moorland was done by digging ditches to drain the bog, which, once free from the water, would oxidize and give away its minerals and other nutrients for arable farming. This period of soil fertility lasts until the oxidized soil subsides, reaches the water table again and needs to be drained deeper. This process was reinforced by embankments, later wind-driven drainage, and centuries later mechanical drainage. The west of the Netherlands lost some six meters of soil and we pumped ourselves way below sea level. And this is what the co-production between nature and man looks like from the satellite. A layer that is not visible on Google Earth is the intricate system of ditches, canals, sluices, locks, water table regulations and pumps that we need to control the water levels for agriculture and for the built environment, which is on wooden foundations. So this landscape is, well, you could say, an artifact in a way. We need this giant machinery to work, farm and live below sea level. It is a result of reclamation; it is engineered, but it is not designed. A co-production between nature and man. But the map reveals yet another layer in the landscape. I want to draw your attention to these little polders, land reclaimed from lakes that emerged after peat cutting. This famous 1622 map boasts the reclamation polders of North Holland. You can recognize them in the map right under it. I show this map because it is an example that landscape architecture and civil engineering share a mutual ancestor. These polders were designed by the first generation of surveyors in Holland. In their very practical education there was no distinction between the technical and the architectonic component. In other words, these surveyors made designs that were both technically sound and aesthetically satisfying. That is possibly why the Beemster, the large polder you see at the top right, was proclaimed UNESCO World Heritage a couple of years ago. It is a fine example of a work from before the fall of labor division, where civil engineering and design were as one. In the 20th century, the organically evolved reclamation landscapes and the designed, planned landscapes merged in the post-war radical modernization and upscaling of the Dutch landscape to make it suitable for modern agriculture. Landscape architects employed by the state helped to shape these new production landscapes, and all in all the program was the biggest project ever undertaken in the Netherlands. Dikes and levees were solely in the engineering domain until strengthening them in the late 20th century spurred a wave of protest from inhabitants. Improving scour, seepage and piping characteristics led to not very much higher but much broader dike profiles that proved intrusive for the existing landscape and locally were said to ruin the river landscape. In this cross-section you can compare the existing river dike with the new, broader technical profile. These protests in a way brought back together engineering and design, which had been separated in sectoral silos for decades. 
Landscape architects were invited to advise Rijkswaterstaat, which thought it had done everything possible to smother the protests with its smart engineering approach. You can see that in the top right. This approach consisted of a comprehensive analysis of the cultural-historic and ecological values of the existing situation, rating them on a national and an international scale, and trying in the design to spare the highest rated elements. This running with the hare and hunting with the hounds, trying to steer the enormous profile clear of cultural heritage sites and then steering the other way to spare a natural area outside the dike, resulted in a haphazard dike profile loaded with ad hoc solutions for local problems, as you can see top right. The core of the advice was that Rijkswaterstaat seemed to have forsaken its cultural task of making self-confident, beautiful dikes, delivering instead a string of compromises. The advisory team offered a smart design toolbox with standard solutions and rules of engagement. I give you an example: cultural heritage always has priority when it comes to triage, because it is literally irreplaceable, while with well-chosen methods natural values can be reconstructed in a dynamic biome such as a river floodplain. The advice itself was a toolbox of rules on how to deal with the relationship between the strength of the dike and the receiving landscape. The changes that were advised could all be done within the technical profile of the dike, as you can see: a fixed format for the support shoulder, and, one of the most interesting recommendations, to sculpt a tapered top. The upper part of the embankment is just a bit steeper than the lower part, making the dike seem more slender. That is because you do not see the upper part of the embankment as well while driving on the dike. This modest intervention recreates the sensation, so to speak, as if you are floating over the landscape, that magic feeling the original dike gave to cyclists and bikers. The newly designed profile is a prime example of the cooperation between engineers and designers. It showed the potential of bringing the two silos together again. But the role of designers is not restricted to helping to shape the new infrastructure, the new dikes. If we take a closer look at the nine ways to save the river, the building blocks you worked with in the last two weeks, you will observe that only enhancing the dikes, lowering the summer bed of the river and lowering the groins can be done within the sectoral domain of the waterboard, where they own the land. For all the other six potential measures, they have to consult all kinds of other landowners, from farmers to municipalities and nature conservationists. The waterboard has to come out of its comfort zone and enter the field of spatial planning, and there design can play a mediating role. To give that process a compass, spatial quality was introduced as the second main goal, after water safety, of the whole program. This introduces a possible tension between the scientific approach of the engineers and the subjective world of quality. As an interface between the matter-of-fact world of the engineers and the matters-of-concern world of the designers, a definition, or at least a description, of spatial or landscape quality is necessary for projects of this scale. 
With a wink to Vitruvius, the Roman architect and engineer, again from before the fall of labor division, who in his ten books on architecture defines the quality of a building as the elegant balance between usefulness, solidity and beauty, the landscape quality of the project was pragmatically defined as the elegant combination of hydraulic efficiency, how to achieve the desired hydraulic effect with minimal means; ecological robustness, how the river dynamics can support habitat quality; and aesthetic meaning, how to make a meaningful contribution to the river landscape while making use of a design idiom characteristic of the river. The aerial photo, by the way, shows the Noordwaard, quite close to Rotterdam and the biggest of the 32 projects in the Dutch Room for the River program. And this is the integral plan of the Noordwaard, bringing together hydraulic, ecological, agricultural, heritage and leisure aspects. The influx opening, in an according-to-plan lowered dike embankment in the north of the polder, makes the Noordwaard a powerful instrument in lowering the mean high water level in the area. The bottom row shows how the polder reacts to different water levels, from completely dry to completely flooded. The agricultural cores, light yellow on the map, are always dry and usable, except for a 1-in-100-years event. And on the right, in the gray boxes, you see how the designer tuned the landscape to the new direction of the outflowing water in case of high discharges. Plans like this are not made in a vacuum but in close contact with specialists from various disciplines. The designer and the project manager used the design phase for participation and co-creation. The design map itself is always a conversation piece. This is the plan of the Munnikenland project, where making a new dike to meet the planned high water is combined with nature development and the accessibility of an old castle, Slot Loevestein, that played an important role in Dutch history and is now a popular heritage site. All combined, design played a considerable role in the whole program and facilitated timely completion of the program through broader public support. And, maybe counterintuitively, design also helped to stay within the budget, because the sober definition of landscape quality stopped extravagances short. Like every week, we end this introduction with future trends in the discipline at hand, which is of course, this week, designing. I want to highlight just one trend that is important in the context of our MOOC: the emerging role design plays as an instrument for future research. While prognosis and modeling may try to predict the future, design has the freedom to speculate on the future, as you can see here, and take in the aspect of free will. And here you see an animated floor projection that shows how, with Chinese building speeds, using the North Sea as a trump card, the North Sea countries by 2050 can harvest 90% of their electricity demand from offshore wind. The animation was shown to the European ministers of energy during the Dutch Presidency and helped forge a new cooperation deal between the North Sea countries. So speculations like this, research by design like this, can help to shape reality, to make it an accomplished fiction, so to say. The ancestor of the Room for the River program was the product of research by design too. A competition of future visions for our river landscape was won by Plan Ooievaar in 1985. 
The plan speculated on how both ecology and the agricultural economy could be boosted in the river area, this typical panorama of the Dutch landscape. The ecological component of the plan pivoted around buying out farmers who in the past constructed these little summer dikes to protect the continuity of grazing even with modest high waters. Removing these levees would allow the river dynamics free play over a larger area and for longer periods of time. Erosion, sedimentation and germination of biota, with everything no longer being grazed away by intensive agricultural use, would result in the return of river forest in Holland. That was the promise of the plan. The first summer levee was opened in 1989, and this is the same terrain after 30 years and after year-round grazing was introduced. Ooievaar, translated into English as stork, kept its promise. The icon of the plan, the black stork, the forest stork, returned a couple of years ago after three centuries of absence. One of the tools was compensating the increased resistance to the water flow from the new forest by strategic excavation, giving out permits to gravel, clay and sand companies. This active way to treat and mold the river bed has been very successful: by now some 8,000 hectares of this toolbox plan has been executed. This research-by-design plan, by a multidisciplinary team of designers, ecologists, river historians and river managers, has been the incubator of the Room for the River philosophy. Thank you for watching." "Room for Rivers: Coordination and Planning","https://www.youtube.com/watch?v=LOz--l4Usls","So, now we know something about the organizations that play a role in Dutch water management. Let us look at the practicalities too. How do all these organizations, with their specific responsibilities, coordinate and interact? In this coordination, the preparation of policy documents plays a central role. All organizations on different levels, with specific responsibilities, prepare their own guiding policy documents. However, while preparing these documents, most of the other organizations involved in water management contribute, and the content of their policy documents is taken into account as well. For example, when a waterboard writes a water management plan, it will take the content of the national and provincial plans into account. In this way, the preparation of these reports becomes a platform for continuous interaction. This involvement of multiple government levels, and the vertical coordination between them, is something that is called multi-level governance in the scientific literature. This way of working is, to a certain extent, also prescribed by national law in the Netherlands. For spatial planning, a similar way of working exists. National government takes responsibility for the planning of highways and railways, ports and airports, energy and housing. Provinces further detail these plans and are responsible for part of the implementation. Municipalities do the same, but have the specific task of licensing land use changes. The role of waterboards in spatial planning is still defined as facilitating, indicating that it is their responsibility to make the planned land use functions possible. This role is, however, more and more under discussion, as more and more professionals in water management and spatial planning believe that water management needs to be more guiding in spatial planning rather than just facilitating. 
Also in spatial planning, a rich diversity of planning documents constitutes the basis for coordination through interaction between national government, provinces, municipalities and, increasingly, also waterboards. The most relevant at the national level are the so-called spatial planning core decisions. These core decisions are important for the entire country and, because they are based on decisions by the parliament, have a strong guiding status. These core decisions not only exist for the spatial planning of the entire country, but also for specific domains of national relevance, like the Schiphol airport region and Room for the River. The spatial planning core decision Room for the River, like many other large decisions in the Netherlands, has an interesting history that shows that such large decisions do not come out of the blue, but often result from a structural change in the way society thinks about problems and how to solve them. Such decisions do not happen overnight, of course. The prelude to the spatial planning core decision Room for the River took at least 20 years. This also has to do with the very structured and formal institutional setting of water policy and spatial planning in the Netherlands. First, the so-called Plan Ooievaar, or Plan Stork, of 1987 described the future of the Dutch rivers from an ecological perspective, taking the return of the black stork, a bird that had disappeared from the Dutch river landscape, as an iconic goal for an ambitious plan. This plan was written on personal title by a group of involved and motivated experts. Following this plan, a series of reports was published by the World Wildlife Fund, an independent advisory commission to the national government, a study by Rijkswaterstaat, and the national water management policy plan, which marked the shift in thinking about rivers towards a more ecological perspective: a more dynamic river landscape in which rivers would be allowed more space for their natural dynamics. The near disaster of the 1990s was the direct trigger for the government to launch Room for the River. Already in 1996, the government issued two important documents. First the Delta Plan Large Rivers, to accelerate dike reinforcements. Second, the ministries of transport and water management and of housing, spatial planning and environment presented a policy document in which they outlined the main points for Room for the River. Hey, wait, what happened here? We have a country that is prone to flooding, where engineers dominated decision making about water management, and that was flooded only one year before. What happened? How did they manage to make this shift so fast? Because only one year after the last flooding of 1995, that indeed sounds very rapid for decision making. However, as we saw in the previous slide, there was a long build-up to this moment. Also, these rare extreme water levels in the rivers in the 1990s were actually expected to become more common in the future due to climate change, causing urgency. However, it shows that events can act as a trigger to temporarily accelerate decision making. The main objectives specified in this 1996 document were, first, more room for the river; second, durable protection of humans and animals against flooding (by the way, here they don't mean your favorite pet, they mean cattle); and third, to limit material damage as much as possible. 
Only then did the Dutch water management and spatial planning institutional machinery start the formal procedure to come to a spatial planning core decision for Room for the River. Starting the procedure thus resulted from a remarkable and long-term process of change in the way the Dutch thought about their rivers. This was, however, not the only remarkable part of Room for the River, because, as we will see in the next slide, the spatial planning core decision Room for the River itself also showed some remarkable innovations and discontinuations of traditional practice in Dutch water governance. This spatial planning core decision Room for the River can be summarized in a room-for-the-river part and a room-for-governance part. The room-for-the-river part really formalizes the shift from dike reinforcement to other spatial strategies for flood safety. In addition, landscape quality is introduced as a second main objective of the program besides flood safety, which, as we saw in the earlier weeks of this course, means lowering water levels. And besides these objectives, the document was very strict in formulating boundary conditions: the realization of legal flood safety standards within task-setting budgets, a maximum discharge capacity and a deadline for completion in 2015. Then the governance part. It proposed a programmatic approach; a leading role for regional governments, when appropriate of course; complementary spatial reservations to make it possible to implement measures outside the dikes; and a basic package of measures with the possibility for exchange, for example by including new ones or abandoning others. This basic package consists of the projects that you see on this map. The main objective of the program, flood safety at a maximum expected discharge of 16,000 cubic meters per second, thus realizing the legal safety standards, was to be realized by implementing all these projects. The second main objective, improving the quality of the riverine landscape, needed to be realized within the projects and thus is part of the programmatic approach. How this worked out in practice, we will see in week four of this course. But first, this week, two aspects really stand out as deviations from historic practice: the combination of measures inside and outside the dikes, and a leading role for regional governments when appropriate. Historically, Rijkswaterstaat has always been responsible for water management and river engineering, but its jurisdiction was confined by the dikes. In the Room for the River program, the river engineering toolbox was expanded with measures outside the dikes, and regional governments could play a leading role in project realization. This actually marked the end of Rijkswaterstaat's hegemony. It was experienced as a real blow to the powerful position of this organization. So, in conclusion, Room for the River resulted from a shift in thinking about rivers within Dutch society and government, but it also ignited innovations in governance. Especially the governance innovations are in hindsight regarded as decisive for the success of the program. These innovations emerged from a firm structure and long history of cooperation in Dutch water management and spatial planning, but they are also a breakpoint from a historic perspective. This might be the most relevant lesson when developing a similar program elsewhere. 
New practices in river management need to be accompanied by fitting innovations in their governance system in order to be successful. This is the real art and science of building a successful Room for the River program. Thank you." "Room for Rivers: Stakeholder Context","https://www.youtube.com/watch?v=Fwl-5Q_y62w","Welcome back. In the first two weeks of this course, we used the knowledge of river morphologists and river engineers to better understand what rivers want and do and how they are impacted by engineering interventions. This week we focus on governance and decision making. With governance we mean the process through which decisions are taken, which includes the whole set of political, institutional and administrative rules and practices. The photo on the left was taken at the festive opening of one of the Room for the River projects, with in the middle the lady in black, the Minister of Infrastructure and Water Management, who was responsible for the program and for financing it. In the schematic on the right, you see that the governance of a program like Room for the River in the Netherlands can also be very complicated. Each colored block represents an organizational entity, on regional and national levels, involved in the formal process of governing the Dutch Room for the River program. This presentation represents the way decision making on the Room for the River program has unfolded in the Netherlands. This is far from a blueprint for other rivers and countries. Every country and river needs a specifically designed and fitting arrangement. We therefore just take you through the context and the tradition in Dutch water governance and the steps taken in decision making for the Dutch Room for the River case. Our core message of today: use the governance tradition and institutions of your own country, and let time do its work. To understand the decision making process that took place, we need to look back in time. Room for the River marks a shift from engineering approaches to river management to the room-for-the-river approach. To understand this, we need to start with a brief history of the Dutch and water, and then look at the changes in thinking that took place. We then continue with the institutional setting of water management and spatial planning in the Netherlands. Next, we focus on the formal process of policy development and decision making until the so-called Planologische Kernbeslissing, the spatial planning core decision that represents the formal decision by the Minister and Government. This is a crucial decision that gives the green light to the start of the Room for the River program, and it also makes the financial resources for its implementation available. We then conclude with discussing some of the remarkable aspects of this governance arrangement, aspects that are new and innovative for the Dutch context and that emerged in the specific context of the Room for the River program. Let's start with a bit of myth-busting. Maybe you have heard stories that the whole of the Netherlands is located below sea level. Well, it's not that extreme, to be honest. But it's true that half of the country is prone to flooding, either from the sea or from the major rivers. The Dutch Room for the River program focuses on the one-third of the Netherlands that is at risk of flooding from rivers alone, so not from the sea. And another story about the Dutch and water states that God created the earth, but the Dutch made the Netherlands. This is actually about reclaiming land from the sea. 
Land reclamation started already centuries earlier, but it really took off in the 17th century, spurred by water safety issues and by private investors that earned their money in the trade with the countries that were later occupied as Dutch colonies. You see these reclamations in purple on the map to the right. After 1900, reclamation became a governmental program, resulting in the reclamation of the Zuiderzee polders, the southern sea polders. These you see in yellow on the map. The famine at the end of the Second World War was a major incentive for these governmental reclamation programs: no more hunger, which was to be achieved by more agricultural production. So life as we know it in the Netherlands would be impossible without flood protection, like dikes, storm surge barriers and other measures. There actually are about 18,000 kilometers of sea dikes, dunes and river dikes. That's a lot. To compare: the total length of highways in the country is only 5,500 kilometers. And all these dikes need proper governance and management. The story of the Dutch and water might look like a success story, but it's actually a story of learning by doing, and of ups and downs. The downs are a history of smaller and bigger flood disasters. They have been formative for the Dutch culture and tradition in water governance: a history of floods, with the biggest in 1421, 1916 and 1953, and numerous smaller disasters in between. In the 1990s river floodings, a disaster was prevented, but it was a close call. This near flood disaster of 1995 was one of the major triggers for the Room for the River program. This event actually coincided with changing thinking, a changing perspective on the ecology of the river landscape and the space that rivers need. As such, this near flooding disaster actually fell into fertile soil. The relation with water and the history of learning by doing and flood disasters shape the organization and institutional embedding of water governance in the Dutch governmental system. The deeply embedded solidarity when it comes to flood risk is still alive, and it also played its role in the Room for the River program. For example, citizens in the village of Lent, who were forced to sell and leave their houses for the Nijmegen Room for the River project, stated that this was difficult for them, but that they felt they had contributed to flood safety for all. The accompanying tradition of land reclamation required large areas of new land to be planned and transformed into productive landscapes and, in addition, induced the development of a strong spatial planning tradition. So what does this institutional setting look like, and who is doing what? Well, the most characteristic of the Dutch water management institutions are the water boards: independent public institutions with a history of centuries and an independent income of tax revenues from the land owners and residents in their region. Note that this does include all the private house owners as well. The water boards are embedded in a complex division of tasks between the national government and local governments with respect to water management. Rijkswaterstaat, an organization similar to the departments of public works in many other countries, acts as the managing organization for canals and rivers on the national level, similar to the task of water boards on the regional level. For the Room for the River program, the specific division of tasks and responsibilities between the water boards and Rijkswaterstaat is extremely relevant. 
Rijkswaterstaat is responsible for the river, the floodplain between the dikes, navigation, water quality, dredging, et cetera, while the water boards are responsible for dike maintenance and construction. However, when major investments in dikes are required, for example because of a national adjustment of flood safety standards, water boards will demand additional financial resources from the national government. You can imagine that this complicated division of responsibilities complicates a Room for the River program, where investments do not only concern dikes, but also spatial measures that give more room to the river. The task of municipalities, who are responsible for land use planning, further complicates things. Now, for the rest of this story, please keep in mind that there are two ways to deal with such complexity. The first, and seemingly attractive, way is to try to simplify things, while another way is to embrace the complexity and just deal with it. In between the national government and the water boards and municipalities is the province, playing a coordinating role." "Room for Rivers: The Building Blocks","https://www.youtube.com/watch?v=B3r-CP8xjtw","I am Martine Rutten, associate professor of water management and climate adaptation at TU Delft. This week we will discuss river engineering, and specifically the building blocks that were introduced in the Netherlands after the floods in the 1990s to increase water safety and reduce flood risk while also increasing landscape values. This week we will focus on the technical and engineering aspects. The landscape design aspects will be discussed in week 4. Within the Room for the River program, at the beginning of this century, several building blocks were introduced to increase the capacity of the river system, and these building blocks are depicted in this image. These interventions were directed at reducing the water levels in the rivers in the Netherlands, including the Waal, the case study of this week, and reducing the water levels was basically done by increasing the discharge and storage capacity of the river systems. Note that this set of building blocks was designed for the heavily engineered, canalized river systems in the Netherlands. For other river systems this set may look very different, and we will get back to that at the end of this lecture. Let's look first at a few equations that help understand the effectiveness of these measures. The first equation states that the discharge capacity equals the flow velocity times the wet cross-sectional area. So basically, if the cross-section is increased and the flow velocity stays the same, the discharge capacity increases and the water course can transport more water. From this equation you may get the impression that this is linear, but this is not always the case, as we will discuss later. Most building blocks were directed at increasing the wet cross-sectional area, as the name Room for the River already implies. An example is the lowering of groins. These structures were intended to prevent the natural meandering of the rivers discussed in week one. A continuously changing water course is difficult to navigate; in addition, these groins help deepen the main channel, also beneficial for navigation, and protect the embankments from creeping ice, debris and wave attack. By lowering the groins, the aim was for the groins to keep these functions but allow for more water discharge during high flows. The discussion on the design of river training works such as groins still continues. 
For example, the deepening, or erosion, of the main channel is going so fast that it is threatening infrastructure, ecology and navigation. Groins parallel instead of perpendicular to the river banks are under investigation as an alternative, but also more drastic changes to river management are being discussed. You can imagine how deepening of the river bed, dike relocation, high water channels, lowering of floodplains, et cetera would increase the wet cross-sectional area and could thereby increase the discharge capacity. The second component in the discharge equation is flow velocity. What does this flow velocity depend on? A simple equation to calculate flow velocity in water courses was developed by the Irish engineer Robert Manning in the 19th century. Basically, the flow velocity depends on the slope (the steeper the water course, the faster the water flows), the roughness (for example, bed vegetation slows down the flow), and lastly the hydraulic radius. The hydraulic radius is basically a measure for how close a water course is to a smooth half-round pipe, the most efficient shape to convey water. For the wide rivers considered in this MOOC, you may take the river depth as a proxy for the hydraulic radius. Of course, the flow velocity pattern in a river is much more complex, with for example stagnant zones in embayments and just downstream of groins, and flow acceleration in other zones. These flow patterns are very important for water and sediment transport and are as such crucial to understanding river engineering. Yet this equation is useful for first-order calculations and understanding. For example, flow velocities over vegetated, rough floodplains may be low if these are not maintained. In addition, floodplains provide additional storage capacity in the river system. Retention reservoirs do this as well. Storage capacity can help to dampen the flood wave and let it pass safely. Yet how effective are retention areas for extreme discharge events? For the typical flood waves in rivers such as the Rhine, a lot of storage is needed. Let's have a look at this hypothetical example with a flood wave of 18,000 cubic meters per second, with a duration of one day, in a river with a capacity of 16,000 cubic meters per second. These figures are roughly based on the Rhine. We can roughly estimate the storage needed by calculating the surface area of the hydrograph above the capacity line. If we approximate this with a rectangle of 1,000 cubic meters per second high and 24 hours, times 60 minutes, times 60 seconds, wide, we arrive at a volume of about 86 million cubic meters. This equals in the order of 6,000 soccer fields with a flood depth of 2 meters. So water retention requires a lot of space, but can still be desirable, especially as droughts are also becoming more frequent in the Rhine basin and many other parts of the world. The last measure we will have a detailed look at is the removal of obstacles. This helps to increase the wet cross-sectional area and flow velocities locally, and thereby the discharge capacity. Yet its effect can propagate upstream. This is due to what is called the backwater curve. Basically, structures like bridges act as bottlenecks in a river system, and this results in higher water levels upstream of the structure. This backwater effect may be aggravated by, for example, debris that is trapped at bridges and other obstacles. The set of building blocks developed in the Room for the River program provides solutions for the heavily engineered water system in the Netherlands. 
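(Editor's note: as a companion to these first-order calculations, here is a small Python sketch of Manning's equation, the discharge capacity Q = v * A, and the rectangle approximation of the retention volume from the lecture. The roughness, slope and channel geometry are assumed values for a generic lowland river, not numbers from the lecture; the retention figures follow the lecture's example.)

```python
# First-order river hydraulics sketch: Manning velocity, discharge
# capacity Q = v * A, and the retention volume from the lecture's
# rectangle approximation. n, slope and geometry are assumed values.

def manning_velocity(n, hydraulic_radius, slope):
    """Flow velocity [m/s] via Manning: v = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

depth = 5.0                    # river depth as a proxy for hydraulic radius [m]
v = manning_velocity(n=0.03, hydraulic_radius=depth, slope=1e-4)
area = 400.0 * depth           # wet cross-sectional area, assumed 400 m wide [m^2]
print(f"velocity ~{v:.2f} m/s, discharge capacity ~{v * area:.0f} m^3/s")

# Storage above capacity: a rectangle 1,000 m^3/s high and 24 hours wide.
excess_discharge = 1_000                   # flood wave above capacity [m^3/s]
volume = excess_discharge * 24 * 60 * 60   # [m^3], ~86 million as in the lecture
pitch = 105 * 68                           # one soccer pitch [m^2], assumed size
print(f"retention volume ~{volume / 1e6:.0f} million m^3, "
      f"~{volume / (2 * pitch):,.0f} pitches flooded 2 m deep")
```

Running this reproduces the lecture's orders of magnitude: about 86 million cubic meters, or roughly 6,000 soccer fields flooded 2 meters deep.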
In other river systems, the solution space may look different and may be larger or smaller. What building blocks are possible when using natural processes such as sediment transport, meandering or biological processes? What would the set look like if you take more perspectives than floods into account, for example droughts or water quality? Can you develop a set of building blocks to add to your river biography?" "Lifelong Learning portfolio: Sustainable Healthcare – saving lives every day, everywhere.","https://www.youtube.com/watch?v=zuIO9fwHoXw","For health professionals to meet evolving needs and challenges that are as diverse as the communities that they serve, staying ahead is not a choice but a necessity. They require cross-disciplinary, rigorous, innovative and accessible training to address current and future demands in a sustainable manner. Our lifelong learning portfolio helps them to create and work in healthcare systems that deliver efficient and high-quality care, while minimizing the impact on the environment, ultimately helping them save lives every day, everywhere. Our portfolio is specifically designed for professionals such as doctors, educators, hospital staff, medical specialists and engineers. We don't do this alone. We work side by side with partners in hospitals and in the medical industry to co-develop courses and ensure they reflect and respond to the latest needs. Academic knowledge goes hand-in-hand with the latest research and technological advances. Professionals gain tools and skills directly applicable to their work, to address daily challenges and adopt innovative solutions. We provide knowledge and tools that are globally accessible and locally relevant. Our courses are available online, increasing worldwide access for thousands of professionals. The growing field of sustainable healthcare needs ever more people with the right skills to use new approaches and new medical technologies, and these need to be applicable in the local context. We also tailor our courses to particular requirements. This is especially significant for countries where the need is high and resources are limited. It's not just theory; it's about creating real solutions. For instance, we help healthcare organizations reduce waste in their supply chain and apply circular principles. We focus on better care for patients by mapping their experiences and perspectives and by developing methods of analysis that are faster and enable better interventions. This is because we are committed to making a positive impact on our environment and on the sustainable provision of healthcare for all, everywhere." "Circular Strategies for Hospitals: Some Examples","https://www.youtube.com/watch?v=WAUilTk12Uo","This week we will explore applying circular strategies in a hospital organization. We have discussed the butterfly diagram. You can use this diagram with the circular approaches to seek and explore your own sustainable processes, like reuse, maintain, repair, refurbish, remanufacture, or recycle: all different kinds of approaches which could help you to improve sustainability in your hospital. Let us explore an example. Staplers like these are actually used often in the Netherlands: around 250,000 procedures require a stapler, or even more than one. 
You can use it only once, despite all the complex components in the unit. Imagine the amount of waste that generates. In this example, you see a complex disposable stapler. In order to make this more sustainable, you can use different strategies. For example, you can reuse the complex motors with gearbox, or you can actually reuse all the stainless steel transmission components, which don't need any extra modification for that. And the parts that are left, for example the covers and the parts that break because you cannot disassemble them damage-free, you can recycle. Thank you." "Sustainable Building with Timber - Inspiration: Hotel Jakarta","https://www.youtube.com/watch?v=dz5OakTTy0k","Today we are in Amsterdam, and we are going to show you a truly remarkable building. Let's meet Joka, who works at the AMS Institute. Hi, Joka. Hi, Arjen. Great to be here. This building really looks interesting to me, although you wouldn't say it's entirely made out of timber, right? It is indeed. And timber manufacturing has actually come a really long way in the past few years, so much so that tall buildings such as this one can be constructed almost entirely out of timber. Wow, that intrigues me. Let's have a look. Let's. It's a beautiful space, really. You know, it's a myth that with timber you cannot make grand architecture, but this actually is really convincing. It's showing that you can. It's incredible indeed. It's good to know that we're currently in Hotel Jakarta, referring to the capital of Indonesia, and that's because historically this hotel is built along the docks from where the ships to Jakarta used to leave. And we can see that translated into the bamboo finishings, the tropical garden, and the big beams spanning the atrium. I feel like we're really experiencing a celebration of the namesake of the hotel. Yes, it is. It's so convincing. Absolutely. Shall we go take a closer look upstairs? Great idea. Let's do it. These bamboo finishings have a beautiful tactile quality to them as well. They do really. It's so soft, really nice. But you know, there's so much more to bamboo, and to timber in general, than just the aesthetics. Imagine that 35% of all global greenhouse gas emissions are related to the other materials, to the inorganic or non-renewable materials, like concrete, cement, plastics, metals. And this is really what you can substitute by using timber. Using timber will result in healthier buildings, which store carbon instead of emitting it, while the raw resource, the trees, grows back and absorbs even more carbon from the atmosphere. And there's more: there's so much more to this than just climate-related aspects. I mean, this is also related to health and to resource scarcity. And look at this roof. There are PV cells integrated in the glass ceiling, and it is breathable. So it's really an integrated concept. 
Timber construction is crucial in reducing global greenhouse gas emissions, and by extension achieving the Sustainable Development Goals and meeting the Paris Agreement. We have to be mindful, though: we can't just use any kind of timber. It needs to be the right timber. And with the right timber, what I mean is that it needs to be sustainably sourced. There is indeed enough availability of that type of sustainably sourced softwood in the world. But really, what I want to stress here is the importance of timber construction going hand in hand with sustainable forestry. Yeah, I think sustainable forest management allows us to grow our forests and to make our planet more green, and at the same time to capture carbon, making it carbon-positive, while at the same time having sufficient resources for buildings like this. Absolutely. And that is why we can get massive carbon gains. Did you know that by replacing just one ton of concrete and steel with one ton of sustainably sourced softwood, we can save as much as one and a half tons of carbon emissions? Imagine that on the scale of a building, a neighborhood, or even an entire city. By the way, Arjen, next to these environmental considerations, I'd like to show you one more thing, if that's okay. Here we go, have a look. I'm curious what you're going to show me. Wow, look at this. We are now in one of the 200 hotel rooms of Hotel Jakarta, and every single one of these hotel rooms has been prefabricated. And what this has done to the construction process is actually speed it up incredibly. Constructing all of these 200 rooms here on-site only took two weeks. Two weeks, incredible. I mean, look at that. Actually, speeding up the construction time of a building like this, in this case Hotel Jakarta, really helps make timber a viable alternative to abiotic, conventional building materials. And actually it even introduces higher quality standards, like we see here. Look at the light, look at the finishing, the high quality, but also the silence. I mean, the room really gives it. It's stunning. Yeah. And what's more, these prefabrication methods of manufacturing and constructing actually allow us to introduce circular building practices, meaning that we can start reusing bits and pieces of the building when our needs start changing. This can happen on the product level, the component level, or even the entire building level. Wow. So this is a beautiful room, with a beautiful view. Let's see. Lovely. Have a look. Well, Arjen, I think I've been convinced; I think we've seen the future of a post-carbon construction world. Yes, I think so too. But it's not just the future, actually, it's the present, because here we are. We have seen this building, which has been built in such a speedy time. This is the present, and this is something which will help change our world." "Systems Analysis for Problem Structuring part 2 the multi actor perspective","https://www.youtube.com/watch?v=3hloMo7OJdM","Hello, my name is Wil Thissen. Welcome back to the second part of this tutorial on using systems analysis for problem structuring. 
In this video, I will briefly recapitulate the starting point, explain how to apply and use systems analysis in the multi-actor situation, and illustrate this using the wind power example, as in the first videos. In the first video, I explained how systems analysis can help you structure a problem from the perspective of a single actor. You start from the criteria, and then use causal analysis to identify the system factors, means, and external factors. You iterate and check for consistency, and analyze the results using a system diagram as well as a scorecard. But when multiple actors are involved, limiting your analysis to the mono-actor perspective is insufficient. Some actors might be affected by actions considered by the problem owner, and therefore oppose his or her plans. Other actors possess means that are necessary for reaching the problem owner's goals. These other actors may have entirely different goals and problem perceptions. You will need to explore the perceptions of these other actors, determine their interests and perceptions, and find out whether these other actors may help the problem owner reach his goals or not. Here, systems analysis can help you identify the other actors, represent their problem perceptions, analyze dependencies, and identify strategies for further action and research. I suggest the following general steps. First, start from the mono-actor system diagram and explore what relevant factors may be influenced by other actors and who these other actors are. Second, identify what other actors may be affected by changes in system factors. These two steps provide a starting point for the third step, a more extensive actor analysis. The actor analysis helps you determine who the critical actors are, those that the problem owner cannot ignore. Please watch the separate tutorial on actor analysis to learn much more about this. And fourth, perform a systems analysis for each of the critical actors. Focus your attention on those means and objectives of the critical actors that may interfere with, or otherwise are relevant to, your problem owner. Then extend your original mono-actor system diagram by including the relevant criteria and means of the critical actors. And again, as before, after each modification of your systems analysis, you should iterate and check for consistency. While you develop and complete the extended systems analysis, look for relevant insights and conclusions. Do other actors have goals in common with your problem owner? Are there any direct value conflicts? We have a value conflict when one actor wants exactly the opposite of another actor. For example, after a dry spell, farmers want rain while tourists will continue to prefer dry and sunny weather. Also, analyze what I call cross impacts: the impacts of the preferred actions of one actor on the criteria valued by another actor. If these impacts are valued positively, the two actors are potential allies. If the impacts are valued negatively, however, the actors have opposing interests and alternative ways may be needed to prevent opposition. Such insights help identify the potential for arrangements between actors. And finally, in much the same way as in the mono-actor case, the systems analysis helps you identify knowledge gaps that can guide the direction of further research. Let us now look at our wind power example again. Our problem owner is the Department of Energy.
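As an editorial aside before the example: the first two steps above, the scan of who influences the system and who is affected by it, can be kept track of in very plain data structures. The sketch below is not part of the lecture; the actor and factor names are illustrative assumptions drawn from the wind power example discussed next.

```python
# Minimal sketch (illustrative, not from the lecture): recording the actor
# scan of steps 1 and 2 as plain dictionaries. Names are assumptions drawn
# from the wind power example discussed in this tutorial.

# Step 1: which system factors can each actor influence?
influences = {
    "TenneT": ["international connections", "supply/demand balance"],
    "European Union": ["international connectivity"],
    "R&D companies": ["storage capacity"],
    "Energy companies": ["number and size of wind farms"],
    "Ministry of I&E": ["permitted wind farm locations"],
}

# Step 2: which actors are affected by changes in system factors?
affected_by = {
    "TenneT": ["supply/demand balance", "energy transport costs"],
    "Other sea users": ["space used by wind farms"],
    "Investors": ["availability of subsidies"],
    "Energy companies": ["costs of energy provision"],
    "Ministry of I&E": ["efficient and sustainable use of space at sea"],
}

# An actor that both influences the system and is affected by it is a
# natural candidate for the more extensive actor analysis (step 3).
candidates = sorted(set(influences) & set(affected_by))
print(candidates)  # ['Energy companies', 'Ministry of I&E', 'TenneT']
```

Note that the lecture determines the critical actors via a proper power/interest analysis, not via this simple intersection; the sketch only organizes the raw scan.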
It wants to enlarge the percentage of offshore power generation while not endangering security of supply and while keeping power costs at acceptable levels. Remember the system diagram for the mono-actor perspective explained in the first tutorial on systems analysis. Using this diagram, our first question is: what factors may be influenced by other actors, and who are these actors? Let's start at the right-hand side of the diagram. We first note that TenneT, the transmission network company, can build new international connections. TenneT also manages the operational balance of power supply and demand. The European Union stimulates the international connectivity of the power networks. R&D companies explore and develop new technologies, and these may lead to influential breakthroughs in available storage capacity, as is also concluded in the tutorial on exploring the future. Energy companies may invest in wind farms and thereby influence their number and size. The Ministry of Infrastructure and Environment is the competent authority for management of the North Sea. It has an important say over where the construction of wind farms is permitted. After the identification of actors who may influence the system, we now turn to looking for actors that may be influenced by changes in the system. Again, going from right to left, the interests of TenneT will be affected by changes in the supply-demand balance of power on the network. Other users of the North Sea, such as shipping and oil companies, may feel the installation of new wind farms at sea will interfere with or limit their own business activities. Investors, as well as energy companies and R&D companies, will have an interest in the availability of subsidies. TenneT's business will be affected by energy transport costs. Energy companies will be affected by the costs of energy provision. Finally, the Ministry of Infrastructure and Environment is concerned about an efficient and sustainable use of the space at sea, and this may be affected by installing wind farms. This example shows how a system diagram can assist you in identifying who the relevant other actors may be. That provides one of the stepping stones for a more extensive actor analysis. The actor network analysis helps you identify what actors the problem owner cannot ignore, the so-called critical actors. If you have watched the tutorial on actor analysis, you may remember the following table. The actors with high interest and high power are listed in the top right-hand corner. The actor analysis suggests those are the five actors that should be taken along in the analysis. I will now illustrate the next steps in extending the mono-actor systems analysis to the multi-actor situation, but limit myself to just two of the critical actors: the Ministry of Infrastructure and Environment and the energy companies. Let's first get back to the mono-actor diagram for the perspective of the Department of Energy. Extending it in this form with criteria, means, and additional factors for the other actors would make the graphics too crowded, so I decided to simplify the original diagram a bit. I aggregated some of the factors, like the number and size of the wind farms. I also left out some of the intermediate factors, and the less essential effects of scale advantages. The essential relations remain, however. To emphasize the effects of location choice on capacity, I added the factor average wind speed at location.
I also indicated that the Department of Energy is the owner of the criteria and of the means by adding the letters DE to those factors and means respectively. Let's start with the Ministry of Infrastructure and Environment as additional actor. They are responsible for fair, efficient and sustainable use of the North Sea and for safety at sea, and have to deal with a variety of users who compete for space, for example wind farms, shipping, oil exploration and nature preservation. Clearly, not all of these uses fit at one and the same place, and safety may be endangered. We now return to the system diagram. We will use a blue color for the Ministry of Infrastructure and Environment and add their two relevant prime criteria for the North Sea to the diagram: efficiency of space use and safety at sea. Installing wind farms may contribute to the efficiency of use, but less space will remain for other uses, possibly also affecting safety at sea. It appears the Ministry of Infrastructure and Environment is the prime responsible agency for assigning locations for wind energy at sea, and we therefore modify the original diagram by substituting the original means, distance to coast, by the means assign priority locations near the coast, and assign it to the Ministry. We also learn that the licensing process is the joint responsibility of the Department of Energy and the Ministry of Infrastructure and Environment, and add white and blue shading to it. To keep the distinctions clear, we use the initials I and E to indicate the criteria and means of the Ministry of Infrastructure and Environment. Based on this diagram, we can now construct the following scorecard. To the original scorecard for the mono-actor perspective, the two criteria of the Ministry of I and E are added. A green color indicates that the impact on a criterion is considered desirable, a red color indicates that the impact is undesirable, and gray indicates that the impact seems to be neutral or is unknown. For example, if you go back to the diagram, you will see that there are both positive as well as negative impacts in the causal chain between the various means on the one hand and efficiency of space use on the other. Therefore, the impacts on efficiency of space use are labeled as uncertain. The impacts of installing wind farms on safety at sea are negative according to the system diagram, but I note that the extent to which this is the case may strongly depend on the choice of location. What can we now learn from this analysis? First, there are no direct value conflicts. The Ministry of Infrastructure and Environment is interested in other things than the Department of Energy. However, using space for wind farms may negatively affect safety at sea. And the impacts of adding wind farms on efficiency of space use are uncertain and depend on opportunities for other uses, location, and perhaps other factors. Therefore, while the interests of the Ministry of Infrastructure and Environment are not directly conflicting with those of the Department of Energy, it will not be a natural ally either. Let us now turn to a second critical actor, the energy companies, and perform a similar systems analysis. Again, we go back to the original diagram in simplified form. The energy companies have two main objectives: security of supply and an attractive return on investments. We add return on investment as a criterion and choose an orange color to distinguish it from the criteria of the other actors.
We also use the label EC to indicate that this criterion belongs to the energy companies. Security of supply is a criterion for both the Department of Energy and the energy companies, and therefore we shade it orange and grey. Working backwards from the criteria again, return on investments is determined by the revenues and costs. Revenues from wind farms in turn are determined by installed capacity and by the prices received per kilowatt hour. This signals the need for a new external factor. Market prices for electricity depend on the costs of alternative energy sources and are generally outside the control of the actors concerned. Energy companies are the prime decision makers regarding the number and size of new wind farms, so we add their investment decisions as a means to the diagram. Now again, we can derive the following scorecard. Focusing on the criterion return on investment for the energy companies, we see that the means of both the Ministry of Infrastructure and Environment and the Department of Energy contribute positively. The eventual impact of energy company investments will however partly depend on an external factor and hence be uncertain. Investments of energy companies in offshore wind farms will have a positive effect on the percentage of offshore power and a negative effect on security of supply and on the costs of energy provision. What can we conclude from this particular scorecard? First, energy companies share some of the goals, and the same dilemma, as our problem owner. There is no direct conflict, as both value security of supply in the same way. Second, energy companies have a strong interest in close-to-coast locations, to keep the costs of investment and transport within bounds. Most actions of the Department of Energy will also benefit the energy companies. They are therefore potential allies for the department. But the return on investment for the energy companies also depends on other factors, notably the market prices for energy, which are outside the control of any of the actors considered. As the next step, we combine the analyses for the two critical actors. The following integrated diagram results. It includes all the relevant criteria and means of both the problem owner and the two critical actors in a single diagram. The coloring enables us to keep the distinction between the criteria and the means of the different actors. The corresponding scorecard now includes all the means and criteria of the three actors considered. New elements in this scorecard are the cross impacts of energy company investments on efficiency of space use and safety at sea. These are uncertain and negative respectively. So again, what overall conclusions can we draw based on this extended systems analysis? Well, first, there do not seem to be immediate conflicts between our problem owner and the two critical actors considered. Second, support by the Ministry of Infrastructure and Environment is crucial. Energy companies will prefer near-coast locations, but these are not necessarily preferred by the Ministry. For the Ministry, such locations may be acceptable only if interference with other desirable uses of the space at sea is minimal. Third, as was also concluded in the tutorial on exploration of the future for the same example, important external factors, notably market prices for electricity, will determine the attractiveness for energy companies to invest. Fourth, concerns about security of supply remain.
They may be alleviated if more international connections are realized and if new ways of large-scale power storage become available in the future, but this also is highly uncertain. Other actors, such as TenneT and the European Union, may be of assistance in this respect. And of course, according to the findings of the actor analysis, we should extend the analysis to include other critical actors, notably TenneT and shipping companies. I conclude the discussion of the example by identifying knowledge gaps that should be investigated further in light of our analysis. As indicated above, it is important to search for attractive locations where wind farms can be built without interfering with other usage functions and without endangering safety at sea. Investment decisions by energy companies are critical, and therefore further research into the factors determining their return on investment is also indicated. To what extent will subsidies work? What is the influence of location choice on benefits and costs? What risks do energy companies face in light of uncertain energy markets? Of course, further research into the security of supply, and how and at what costs it may be guaranteed when a larger fraction of power is generated at sea, should also be on our list. I conclude this tutorial with a number of more general remarks and suggestions. First, if you have been watching the development of the example attentively, you may have noted slight modifications and inconsistencies along the way. This illustrates the fact that there is not a single best system model. You will need to adapt as you proceed and learn more and more about the problem situation. As more actors are included, the complexity of the model will generally rise. For example, if we also include TenneT and the shipping companies in the integrated systems analysis, their criteria and means are added and system boundaries may shift. For complicated cases and more advanced analyses, we suggest the use of Dynamic Actor Network Analysis, or DANA. It is a software package that was especially designed for applying systems analysis concepts in multi-actor settings, and you may find the software instructions on the Blackboard page for this course. Second, on a more general level, we have developed separate tutorials on actor analysis and systems analysis. However, as we have seen, the two approaches are not independent of each other. On the one hand, systems analysis provides starting points for actor analysis. On the other, actor analysis provides insights into who the critical actors are and into their positions. In turn, systems analysis helps to sharpen the insights into the mechanisms at play and to specify the dependencies between the critical actors. Iteration between systems analysis, actor analysis and exploration of the future is key to an effective approach. Together, these methods will help you in providing the building blocks for your storyline and your research plan. Thank you for your attention." "Systems Analysis for Problem Structuring part 1B the mono actor perspective example","https://www.youtube.com/watch?v=NJUKngA_B_E","Welcome to this second video on systems analysis for problem formulation and analysis. In the first video, I explained the use of systems analysis, and in particular the system diagram, for the mono-actor situation. In this second video, I will illustrate this approach using a simplified example we call Wind Power.
Let's assume your client is the Dutch Ministry of Economic Affairs, in particular the Department of Energy. The ministry wants to enlarge the fraction of renewable energy generation and would therefore like to increase the amount of wind energy at sea. They are also concerned about the affordability of power to industry and to the general population, and about the security of power supply. So let's assume the demarcation of the problem and the analysis of objectives has led to the identification of three criteria: the security of supply, the percentage of offshore power generation, and the costs of energy provision. We now reason backwards to explore what factors have an influence on these system outcomes. Clearly, the offshore percentage is positively influenced by the installed wind power capacity at sea, a key system factor. Capacity at sea in turn is determined by both the size and the number of wind farms at sea. The Ministry of Economic Affairs cannot itself invest in new wind farms, but hopes to stimulate investors and energy companies by providing subsidies and by expediting the granting process of licenses as needed. We therefore consider these as means of the ministry and place them on the left side of the diagram. As a result, we have identified a key pathway through which the client can influence his primary objective, the fraction of power generated offshore. But does this mean the diagram is complete? Surely not. As I explained in the first video, we need to check for completeness and consistency by iterating. As the next step, we therefore ask whether any other factors can affect the client's objectives. Clearly, the power generation capacity of a wind farm depends on the average wind speed at the sea location. This is a factor that cannot be directly influenced by the Ministry, so we consider it as an external factor and place it at the top, outside the system boundary. Next we reason forwards and ask whether use of the means will have any relevant side effects or impacts on any of the other criteria. The number and size of wind farms at sea affect investment costs, and therefore we add this influence on the overall costs of energy provision. Now we reason backwards from the costs again and ask whether other factors will influence overall cost. We note that investment costs will, among other things, depend on the water depth and the chosen location. Moreover, the choice of location will affect transport costs. The larger the distance to a land connection point, the higher the transport costs. Therefore the government may wish to influence the location choice, in particular the distance to the coast. For the time being, we add influencing the distance to the coast as a means to our diagram. But note, the Department of Energy cannot fully by itself determine the location choice. Other ministries, notably the Ministry of Infrastructure and Environment, have strong interests and power over which locations are permissible. We complete our provisional exploration of factors affecting costs and add the influence of scale advantages. The larger the size of the wind farms, the larger the scale advantage, and this may help reduce average cost. Last but not least, we ask whether changes in the system may influence the security of supply. We should take into account that the increase of the percentage of wind power may affect security of power supply. Wind speed varies, and wind may even be totally absent at times, and then no wind power can be generated.
As a result, the balance of power demand and supply will be negatively affected if the offshore power fraction gets large, and this will negatively affect security of supply. We include these influences in our diagram, but we also observe that available power storage capacity and the number of international connections may positively affect the balance. We add these as external factors as well, as the Ministry cannot directly affect their development. We could go on and add more detail and sophistication to the diagram. But I stop here, as the main purpose of this video is to illustrate what we can learn from a diagram like this. I therefore return to the uses of a system diagram introduced in the first video. Does our problem owner face an intrinsic dilemma? The answer is clearly yes. Following the highlighted links in the diagram, it appears that security of supply is always negatively affected if the primary objective, more wind power, is improved. And of course, also costs of power supply may rise as more investments in wind farms are made. Are available actions effective? Let's again look at the diagram. The links highlighted in orange show that the problem owner's means do influence all the goals. But all of them cannot be achieved at the same time. Based on the diagram, the following scorecard results. It shows that use of the available means will help increase the fraction of offshore power, but it does not specify to what extent this effect will occur. It also shows that the influence of these means on costs is not clear. Furthermore, it appears that two of the three main criteria are indeed sensitive to external influences. How big that influence could be, and whether the client's goals could be attained without taking action, cannot be established based on the information included in the diagram alone. Please watch the tutorial on future exploration to learn more about this aspect. Are there any important knowledge gaps? Yes, several, I would say. Again, critical influences about which little is known are highlighted in this diagram. We only have a general idea that subsidies will have a positive effect on decisions by investors and energy companies, who need to invest in wind farms at sea. But we don't know the magnitude of the influence. We also don't know to what extent the external factors will affect the criteria. Also the impacts of location choices on costs need to be further explored. Concluding, the simple systems analysis in this example leads to the following observations. The client indeed faces an intrinsic dilemma. He, by himself, cannot achieve both security of supply and a high fraction of wind power generation at the same time. The client has a number of means that can contribute to achieving some of his objectives, but it is not sufficiently clear to what extent these means will be effective. And this may also depend on a number of external factors. The client depends on other actors, for example those who influence location choices, investment decisions and power network structures. To enable us to better help our client, more insight is needed into the influences and factors listed here. Please note that the example, as I have discussed it, is a very simplified one. In reality, more and other relations need to be taken into account, and the systems analysis would need to be based on a more thorough study of the factors and mechanisms determining system behavior.
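The lecture derives this scorecard by inspecting the diagram by hand. For readers who want to experiment, here is a minimal sketch, an editorial formalization rather than anything from the course, of how such a diagram can be captured as a signed directed graph, with scorecard cells derived by multiplying the signs along every causal path from a means to a criterion. The factor names are simplified assumptions based on the example above.

```python
# Minimal sketch (illustrative formalization, not from the lecture): a system
# diagram as a signed directed graph; scorecard entries follow from the
# product of edge signs along each causal path from a means to a criterion.
from itertools import product

edges = {  # (from_factor, to_factor): sign of the causal influence
    ("subsidies", "wind farm capacity"): +1,
    ("licensing", "wind farm capacity"): +1,
    ("wind farm capacity", "% offshore power"): +1,
    ("wind farm capacity", "investment costs"): +1,
    ("wind farm capacity", "scale advantages"): +1,
    ("investment costs", "costs of energy provision"): +1,
    ("scale advantages", "costs of energy provision"): -1,  # scale lowers cost
    ("% offshore power", "security of supply"): -1,         # intermittency
}

def path_signs(src, dst, sign=1, seen=()):
    """Yield the product of edge signs along every path from src to dst."""
    if src == dst:
        yield sign
        return
    for (a, b), s in edges.items():
        if a == src and b not in seen:
            yield from path_signs(b, dst, sign * s, seen + (b,))

means = ["subsidies", "licensing"]
criteria = ["% offshore power", "security of supply", "costs of energy provision"]

for m, c in product(means, criteria):
    signs = set(path_signs(m, c))
    cell = "0" if not signs else "+" if signs == {1} else "-" if signs == {-1} else "?"
    print(f"{m:10s} -> {c:26s}: {cell}")
```

Running this reproduces the lecture's qualitative conclusions: the means raise the offshore fraction (+), hurt security of supply (-), and have an unclear effect on costs (?), because the investment-cost and scale-advantage paths pull in opposite directions.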
Discussions with the client, literature study and consultation of experts will help in this endeavor. I have limited myself to the basic principles that will help you structure and relate the different elements of a problem analysis. I have focused on the methodological and conceptual aspects of the analysis and shown how a relatively simple analysis can help in identifying key features of a problem situation. This type of analysis provides a starting point for further research and analysis; an exploration of possible futures is needed, as well as a more systematic and thorough actor and network analysis. As I said in the beginning, the diagram I have discussed represents the perception of a single actor, the problem owner. I have only taken his or her perceptions, objectives and means into account. However, in the reality of multi-actor policy making, others may be affected by the problem owner's actions, and the problem owner depends on other actors for reaching his goals. These other actors may have different perceptions of the problem situation and different objectives that should be considered as well. I will therefore take a multi-actor perspective on the use of systems analysis in the next video. Thank you for your attention." "Systems Analysis for Problem Structuring part 1A the mono actor perspective","https://www.youtube.com/watch?v=P1shkt9Ujow","Hello, welcome to this tutorial on systems analysis for problem formulation. My name is Wil Thissen and I'm teaching policy analysis at the Faculty of Technology, Policy and Management. In this tutorial, I will explain why and how you can use systems analysis concepts for problem formulation and analysis. Systems analysis is a broad field, and I will limit myself here to a few basic concepts, in particular the system or problem diagram. I will explain the basic notions in the first video and present a simple example in the second video. Both videos start from the perspective of a single problem owner. I call this the mono-actor situation. The third video explains the use of systems analysis in a multi-actor situation. Why should you be interested in using systems analysis concepts anyway? Suppose you have discussed the problem of your client. You have used means-ends analysis and an objectives tree for problem demarcation. Your problem owner also has given you some idea about the solutions he or she has in mind. But are these solutions suited for the particular problem? Will they work, while perhaps other factors and actors outside the control of your client influence the outcomes? Will they have undesirable side effects? Are perhaps other solution options available? Systems thinking will help you to connect the different pieces of your analysis and help you answer these and other questions. More particularly, use of a system diagram provides a basic structure that will help you integrate and connect the demarcation of the problem area, the analysis of objectives and means of the problem owner, the causal assumptions about how the means affect the attainment of the objectives, the identification of external factors and their possible future influences, and the actor and network analysis. Because it helps you integrate and connect these elements, a system diagram is an essential aid for achieving consistency in your analysis. A system diagram provides a basic map of all the key elements of your problem analysis, and that is an excellent starting point for further analysis and modeling of the system.
The aim is to find insights that will help your client choose appropriate actions. Finally, a system diagram, if not too complicated at least, can help you communicate your insights to your client and other stakeholders. Let me now briefly explain the basics of the system or problem diagram. A system diagram is a simple conceptual model that represents the key aspects of a problem situation. It specifies what part of reality is of interest to the problem owner. We call that part the system. The system boundary is generally displayed as a dashed rectangle or box. What is in the box is considered to be part of the system. What is outside is part of the system's surroundings or system context. Your client is interested in specific outcomes of the system. We call those the outcomes of interest, or criteria, and display them as factors on the right-hand side of the system. We assume the problem owner has certain means through which he or she can deliberately influence system behavior. We represent these on the left-hand side of the diagram. There will generally also be factors that influence system behavior but that are not under the control of the problem owner. These factors we call external or contextual factors, and we place those at the top of the diagram. By way of convention, we portray the means as rectangles and the other factors as ovals. So far the system box is empty. It is useful to indicate how the means and outside factors may affect the criteria. Therefore we portray the causal pathways and system factors through which the means and external factors affect the criteria inside the box. Note that the elements of the system diagram also specify the structure of the so-called scorecard. A scorecard is a table where we put the criteria on the horizontal axis and the means on the vertical axis. If we have sufficient understanding of how the system works, that is, how the means affect the criteria, we can complete the scorecard by filling out the individual cells. For example, if means M1 will positively affect criteria C1 and C2, we put a plus sign in the related cells. Similarly, the minus sign in the first row indicates a negative impact of M1 on C3. If we expect no significant impact, we may put a zero in the cell, and a question mark if we don't know whether there will be an impact or what its direction will be. Now let's get back to the system diagram and its construction. As a first step, you explore the problem situation and choose the scope of analysis, as explained in the video on problem demarcation. Next, you specify the criteria using an objectives tree. The criteria will be the outputs of your system. Third, you identify key system factors that affect the criteria by using means-ends analysis and causal analysis. Eventually you identify the means that can be used by the problem owner and the relevant external factors. Fourth, and this is very important, you should check your diagram for completeness and consistency. The following questions will help you check the completeness and consistency of your diagram. First, does the set of system outcomes at the right-hand side of your system diagram correspond with the set of criteria you found at the bottom of your objectives tree? Second, does the set of means at the left-hand side of your system diagram match the conclusions of your means-ends analysis? Third, check whether all the means have an impact on at least one of the criteria. If not, the means may very well not be relevant at all. Fourth, check whether all the external factors affect at least one of the criteria.
And fifth, have you identified and included relevant side effects of the means? For example, the costs or other negative side effects of using the means are often forgotten. If the answer to any of these questions is no, then you should go back, iterate and revise, and possibly enrich your analysis. And after that, check again. What can you learn from a completed system diagram to sharpen your understanding of the problem situation? First, you might find intrinsic dilemmas. Are there any actions or changes that are always good for some goals but bad for other goals? The presence of such dilemmas indicates that the client will have to make difficult trade-offs. Second, you can get a first idea about the effectiveness of the client's actions, and whether in theory the client can attain all of its goals. Perhaps the client's influence is only minor, and other factors or actors will have to be involved. Third, you may explore the extent to which the criteria are sensitive to external influences. How uncertain is the development of the contextual factors, and to what extent may such developments seriously affect system behavior and outcomes? Think also of future situations in which perhaps the client's goals are attained without taking action. You can learn more about this by viewing the tutorial on future explorations. Finally, look for critical knowledge gaps. For example, some of the relevant causal relations may be very uncertain, and this indicates a need for further research. So far I have outlined the principles of using a system diagram in general terms. I understand that much of this may sound very abstract. Chapter 3 of our coursebook on policy analysis of multi-actor systems contains more material. And in the next video I will illustrate the use and construction of a system diagram using an example. Thank you for your attention." "Survival Analysis (Part 1) - Advanced Credit Risk Management Course (Sample Video)","https://www.youtube.com/watch?v=1jf6Yr_see8","Hi there, welcome. As promised, this week we start our mini course on survival analysis. We will see that survival analysis is a very nice way of modeling and studying the probability of default and the loss given default. Now, survival analysis is a branch of statistics that deals with the study of the random time at which a specific event, or a series of events, manifests itself. The typical example comes from biostatistics, where we are considering the lifetime of individuals. So in survival analysis we typically have two states, A and B, and we are interested in the event that governs the transition between the two states. So state A can be life, state B can be death, and we are interested in the time at which we have the transition, so the individual we are considering dies. Another example, closer to our field, is when we are considering a company. This company can be in two states, just to simplify: non-default and default. So with survival analysis we will study the random time at which we observe the transition, that is to say, the default. Obviously we can have a lot of other applications, and survival analysis is a very important part of both biostatistics and reliability theory in engineering. Now, in the usual setting of survival analysis, we have a random variable capital T, which is a continuous random variable representing the time at which we observe the transition from state A to state B. So it is the occurrence time of the event. For us it will be, for example, the time of the default.
We then call small f of t the density of this random variable, and with capital F of t we indicate its cumulative distribution function. A fundamental object in survival analysis is the survival function. Now fix a time small t: the survival function gives us the probability of surviving at least until time small t. This is equal to the probability of our random variable capital T being larger than or equal to small t, and this probability can be expressed in terms of the CDF, the cumulative distribution function capital F. In fact, this is equal to 1 minus the probability of not surviving until time small t, that is to say, 1 minus F of t. On your screen you can see two typical examples of survival functions. One is an empirical one, in particular one that we can obtain using an estimator we will see together, the Kaplan-Meier estimator, and the other one, the continuous blue line, can be a theoretical one. This is, for example, what we can obtain when we use parametric or semi-parametric approaches. Given the survival function, we can define another very important quantity, the hazard rate. The hazard rate gives us the rate at which a system surviving at least until t changes its state exactly in t. That is to say, assume we are considering a company. We know that our company has survived until t, so we have seen no default for that company. Okay, the hazard rate tells us what is the rate at which that company will default exactly in t, given the survivorship until t. The hazard rate is defined as the ratio between the density f of t and the survival function S of t. Now, it's easy to see that the density can be written as nothing more than minus the derivative of S of t with respect to t. Remember that S of t is 1 minus capital F of t. And this is the way in which we represent the hazard rate. If we multiply the hazard rate lambda of t by an infinitesimal time interval dt tending to 0, what we obtain is the conditional probability that our company, for example, will default in the time interval t, t plus dt, given that it has not defaulted until time t. Taking the integral of the hazard rate until time t, so from 0 to t, we can define the so-called cumulative hazard function. It's easy to see that there is a nice relationship between the survival function and the cumulative hazard function. In particular, the survival function is equal to the exponential of minus the cumulative hazard function. So, how can we model the probability of default of a counterparty using survival analysis? We first have to define the setting. In this situation, the random variable capital T represents the random time at which we observe the default of the counterparty, more or less as we did in the previous models, do you remember? For what concerns the survival function S of t, this will give us the probability of not defaulting until a time small t that we fix, or that we want to study. And finally, lambda of t, the hazard rate, will give us the instantaneous rate of default for our counterparty, given that our counterparty has survived until time t. When we deal with the modeling of the loss given default, the event itself is no longer the default of the company but the full recovery of the company. So capital T is now the random time at which we observe the full recovery, so the moment the recovery process is over. With capital F of t we represent the expected recovery rate at time small t, and with the survival function capital S of t we represent the expected loss rate in t. Remember that S of t is equal to 1 minus capital F of t.
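To keep the notation straight, here is a compact recap of the relations just stated, with T the default time, f its density, F its CDF, S the survival function, lambda the hazard rate, and Lambda the cumulative hazard. This block is added for reference only; it restates exactly the definitions given in the lecture.

```latex
S(t) = P(T \ge t) = 1 - F(t), \qquad
\lambda(t) = \frac{f(t)}{S(t)}, \qquad
f(t) = -\frac{dS(t)}{dt},

P\big(t < T \le t + dt \mid T > t\big) \approx \lambda(t)\,dt, \qquad
\Lambda(t) = \int_0^t \lambda(u)\,du, \qquad
S(t) = e^{-\Lambda(t)}.
```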
For what concerns the hazard rate lambda of t, this will be the speed of recovery on the un-recovered amounts at time t. In this mini course on survival analysis we will consider together different approaches: some parametric, like the Weibull or the log-logistic models; some semi-parametric, like the very important Cox model; and some non-parametric, like the famous Kaplan-Meier estimator. But before introducing the Kaplan-Meier estimator, we need to say something about a peculiar characteristic of survival data, that is to say, censoring. Censoring occurs every time we are not able to really assess the time at which our event, for example the default, took place. Consider the four companies on your screen. Company one and company three represent no problem for us, because we can observe t1 and t3. But for companies two and four we have problems. For company two, for example, since we are making our evaluations today, we have not yet observed the default. But this does not allow us to say whether the default will happen tomorrow, in one week, or in ten years. For what concerns company four, on the contrary, what we have is that somehow this company has disappeared from our data set. So the final time we observe is not really the default time, but just the time at which the observation has been censored. To be more precise, what we have just described is what statisticians call right censoring, because we are censoring the end time. Naturally, you can imagine that sometimes we have left censoring. For example, we don't know (but this is quite absurd in our case) the time at which a company entered our portfolio, or the time at which a company was founded. This is something that happens much more frequently in survival studies in biostatistics, where it can be difficult to assess the exact age of a patient entering the sample, and so on. In due time we will see that we need to pay some attention when dealing with censored observations, which, I repeat, for us will essentially be right-censored observations. Okay, let's conclude this class by saying something about a simple but fundamental tool in survival analysis. It's a non-parametric estimator, and it's the Kaplan-Meier estimator. On the course platform you will find some extra details for the computation of this estimator using R. The Kaplan-Meier estimator is a non-parametric estimator for the survival function. It is defined for both censored and non-censored data, and in practical terms it is represented as a series of declining horizontal steps that approaches the true survival function if the sample size is sufficiently large. The value of the empirical survival function that we obtain with the Kaplan-Meier estimator is assumed to be constant between two successive distinct event times. Consider n companies and, for a k smaller than or equal to n, take into consideration the event times t1, t2, ..., tk, which we have ordered. Now let nj be the number of companies still active just before time tj. In case of no censoring, nj is just the number of survivors. In case of censoring, nj will be the number of survivors minus the observations censored before tj. With dj we indicate the number of companies experiencing the event, say for example the default, at time tj. The Kaplan-Meier estimator for the survival function capital S of t is represented by S hat of t, and it is the product, for j such that tj is smaller than or equal to t, of 1 minus dj over nj.
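To make the product formula concrete, here is a minimal Python sketch of the Kaplan-Meier estimator under the definitions just given. The course materials demonstrate the computation in R; this translation and its toy data are illustrative only.

```python
# Minimal sketch of the Kaplan-Meier estimator, following the lecture's
# definitions: n_j = number at risk just before t_j, d_j = number of events
# (e.g. defaults) at t_j, and S_hat(t) = prod_{t_j <= t} (1 - d_j / n_j).
# Right-censored observations (event=False) leave the risk set without
# counting as events.

def kaplan_meier(times, events):
    """times: observation times; events: True if the event (default) was
    observed, False if the observation was right-censored at that time."""
    data = sorted(zip(times, events))
    n = len(data)                          # number at risk at time 0
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = leaving = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            d += data[i][1]                        # events at t
            leaving += 1                           # events + censorings at t
            i += 1
        if d > 0:
            s *= 1 - d / n                         # n plays the role of n_j
            curve.append((t, s))
        n -= leaving                               # shrink the risk set
    return curve

# Example: 6 companies; event=False marks a right-censored observation
print(kaplan_meier([2, 3, 3, 5, 7, 8], [True, True, False, True, False, True]))
# -> [(2, 0.833...), (3, 0.666...), (5, 0.444...), (8, 0.0)]
```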
Notice that 1 minus dj over nj is nothing more than the proportion of survivors at time tj. This is the simple but extremely useful Kaplan-Meier estimator for the survival function. I thank you for your attention and see you next time." "Water Systems Design: Learning from the Past for Resilient Water Futures - Course Introduction","https://www.youtube.com/watch?v=X1mZOmgqFa0","If you're working with water, you know how complex water systems are. Either there is too much or too little, and often the water is polluted because of human interference. When it comes to water systems, there are many stakeholders involved. And each of these stakeholders has their own values and interests. On top of all this, a changing climate has a huge impact on how we live with water, how we access it, and how we protect ourselves from it. To negotiate this complexity, we have designed a challenging and dynamic course, especially for professionals working with water. We offer tools to look at your water system as something that has developed over time and in a specific context. Get inspired by case studies and professionals from all over the globe, and get to know the past, negotiate the present, and design water systems for a sustainable future." "Machine Learning: Classification Terminology and Basics","https://www.youtube.com/watch?v=6e_CMl1dp0g","In this video we discuss the basics of classification and its terminology. First, let's start with an example of a self-checkout machine in a supermarket and how this application could be powered by machine learning. The self-checkout works as follows. First, you do your shopping, and when you are done, you go to the machine and you scan the barcode of your product. Afterwards, you put your product in your basket. Now, the product is measured using all kinds of sensors. For example, the color could be measured by a camera and the weight can be determined by a scale under the basket. These measurements are our feature vector for the object. Now, a model uses these features as input and will try to predict which product it is, called the class. Is it an apple or is it a banana, et cetera? If the predicted class agrees with the scanned barcode, nothing happens. However, if the predicted class does not agree with the scanned barcode, the self-checkout machine suspects that you secretly switched items. In this case, it could for example send an employee to check your products. How do we make such a classification model? First, you need examples. You need some apples, you need some bananas, et cetera, and you measure their features. You take your sensors, for example: you weigh your products, you measure their color, et cetera. You make a data set of examples with features and their labels. This will be our training data. This training data is input to our learning algorithm, which tries to come up with a classification model. This model is usually governed internally by some kind of parameters. This classification model can take a new object, its features, as input, and will try to predict its class. For classification using two features, the so-called two-dimensional classification problem, we can easily plot our model and our data. Our classification model divides the space into different regions, one for each class, and these regions are separated by what is called the decision boundary. That is the boundary where the model changes its decision. Our previous example was a so-called binary classification problem.
Meaning we have two classes, apple and banana. In a multi-class classification problem, we can have many more classes. For example, here we have six classes: apple, banana, mango, broccoli, tomato, and eggplant. Finally, after training our model, we usually want to evaluate its performance on new and unseen data. For classification, performance is often measured using the accuracy. The accuracy is the number of correctly predicted samples divided by the total number of objects. This is the percentage of objects that are correctly classified. We can also evaluate using the error rate. The error rate measures the opposite of accuracy. We count the number of mistakes and divide by the total to get the error rate. This is the percentage of objects that are misclassified. Now here is a question for you to check your understanding of this concept. What is the error rate for the classifier here on the right? Please pause the video and take a moment to compute it, and then we discuss the answer afterwards. Okay, welcome back. Here on the right we see that two objects are misclassified. That is, one banana is misclassified and one apple is misclassified. There are a total of 12 objects. Thus, the error rate is two divided by 12, which is approximately 17%." "How Can Airlines Become More Sustainable?","https://www.youtube.com/watch?v=bJiWgfBJ39Y","Welcome to this lecture on airline operations. My name is Bruno Santos and I'm an assistant professor at the Air Transport and Operations Group at our faculty. I would like to start this lecture by asking two questions. The first one is: why do we need to fly? Well, the capability of flying gives us access to several places and activities which could hardly be reached if there were no airlines. Think for instance of small islands or remote territories. Additionally, even though we have recently learned how to work remotely and travel less, some activities require our presence, and there's always the fun of visiting new places. Airlines are also crucial for trade, particularly to transport perishable cargo items that cannot spend a few days inside a boat. The second question is: are airlines really non-sustainable? The answer to this is 'not necessarily' and 'yes'. So what do I mean with these ambiguous answers? Well, if we look at the three pillars of sustainability, social, economic and environmental, airlines play a vital role in our society, as I just mentioned. They also have an essential role in our economies. So airlines are sustainable according to these two pillars. However, they are typically the least environmentally friendly transport option. So in this lecture I'll focus on this last pillar. In particular, we will discuss some of the solutions that airlines can follow to reduce their operations' footprint. To do this I'll refer to the airline planning framework. This framework summarizes the different decision processes within airlines, from the strategic level, one to five years before operations, to the decisions at the time of flying. Let's start with the strategic decisions. Two to five years before flying, the airlines have to make their decisions regarding which aircraft they will have in their fleet. Fleet planning, particularly fleet renewal, is a significant challenge for airlines seeking to become more sustainable in the future. The new, more eco-friendly aircraft and propulsion technologies that we have discussed in this course will only be a better alternative if and when adopted by airlines.
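Looking back for a moment at the classification metrics in the machine learning video above: here is a minimal Python sketch that reproduces that video's 2-out-of-12 worked example. The label lists are illustrative assumptions chosen to match the counts described.

```python
# Minimal sketch of the accuracy and error-rate computations from the
# classification video above. Labels are illustrative, arranged so that one
# apple and one banana are misclassified, as in the video's example.
true_labels = ["apple"] * 6 + ["banana"] * 6
predicted   = ["apple"] * 5 + ["banana"] + ["banana"] * 5 + ["apple"]

correct = sum(t == p for t, p in zip(true_labels, predicted))
accuracy = correct / len(true_labels)    # fraction correctly classified
error_rate = 1 - accuracy                # fraction misclassified

print(f"accuracy = {accuracy:.0%}, error rate = {error_rate:.1%}")
# -> accuracy = 83%, error rate = 16.7%   (2 mistakes out of 12, ~17%)
```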
But the fleet renewal process is a financially demanding problem, representing a long-term commitment for airlines. It takes many years for airlines to renew their fleets, and not all airlines have the purchasing power to do so. So the incentive to speed up this renewal effort has eventually to come from states, either by providing financial advantages for airlines to buy greener aircraft, or via carbon taxes or penalties such as the CORSIA scheme promoted by the United Nations. This is a carbon reduction scheme to lower CO2 emissions for international flights. The problem with this scheme is that it is still voluntary and does not yet present a meaningful monetary incentive for fleet renewal. But we, passengers, can also play a role in this fleet renewal urgency. We can decide to fly with airlines with newer and more eco-friendly technologies. Let's now talk about the airline's network. The network of an airline can be influenced by differential carbon taxes or innovative operational solutions. For instance, research shows that intermediate stop operations, in which long flights are split into two or more shorter segments, can reduce fuel burn by 5 to 10%. The savings come from carrying less fuel when the aircraft takes off, or from using smaller aircraft with lower ranges. More interesting is the fact that this type of operations can represent a promising solution to overcome the range limitation of, for instance, hybrid or turbo-electric aircraft. Another network option is to stop flying short-haul flights and replace them, for instance, with rail alternatives in an integrated intermodal network. This was done, for example, by Austrian Airlines, which in 2020 replaced its flights between Vienna and Salzburg with a train option, or by the French government, which decided to ban internal flight journeys shorter than two and a half hours. Solutions like these are only possible when a rail option is present and represents a practical alternative to flying. When we move in time, to about one year before operations, the airline schedule has to be defined. This is the moment that the timetable is defined and the resources are allocated to each flight in the timetable. According to a study from the start of the century, airlines could potentially reduce their required fleet by 10% if they optimized their schedule from a clean slate. The goal would have to be to increase the flight load factors by allocating aircraft of smaller size and by eventually reducing the flights' frequencies. It will be, however, hard for airlines to decide their schedule from scratch, and passengers would have to accept less frequent flights between their destinations. Nevertheless, it is not too hard to believe that in the era of machine learning, we can expect operational impacts similar to the ones that were estimated in the study from 2002. Schedules can now be optimized for efficient and resilient operations, by being capable of predicting and considering future irregular operations. Once the timetable for the next year is defined, and it is known which aircraft will be allocated to each flight, the airline starts the revenue management process. The first step is pricing, which is the process of defining ticket fare classes. An airline, even a low-cost carrier, can have more than 10 different fare classes for the same seats. The revenue management part then manages which fare class is available at any time, to maximize the revenue generated with the bookings. So how can an airline reduce its footprint during this process?
I will give you two examples. The first one is offering a carbon offset program. When you purchase your tickets, at many airlines you can already voluntarily pay the costs associated with your trip's carbon footprint. On the other hand, other airlines, like easyJet, choose to pay the offsetting costs without asking for a direct contribution from passengers. In both modalities, we need to remember that the carbon offset option is not a solution to the problem, but a mitigation action. The second solution at the revenue management level is to sell virtual products to replace flights. Some airlines and travel agencies are becoming aware of the need to reduce the number of flights to achieve our climate targets. So they are selling virtual experiences, like virtual conferences or touristic tours, using 360-degree views on your computer or offering an immersive experience by borrowing virtual reality headsets. Another tactical decision is the scheduling of aircraft maintenance. This is a necessary process to keep the aircraft airworthy and ready to operate at all times, but it generates waste. Standard steps towards more environmentally friendly maintenance operations are to adopt waste management procedures and to use eco-friendly chemicals when performing tasks. But the current challenge in maintenance is to adopt smarter maintenance policies and, in particular, condition-based maintenance policies. The idea is to use aircraft sensors and other inspection techniques to constantly monitor the health condition of aircraft components, including engines, systems and structures. With such a policy, the components can be used until their end of life, staying in service longer and consequently reducing waste. In addition, if reliable health monitoring solutions are developed, future generations of aircraft can be designed in a less conservative way, reducing the aircraft's weight and consequently the energy used to perform a flight. Let's move now to the operational decisions. These are made in the last hours before the flight and during the flight. Here, airlines may decide to operate routes that minimize flying through climate-sensitive areas when defining their flight plans. Studies suggest that when following climate-optimized flight plans, the total climate impact can be reduced by 40% with only small extra fuel costs. Still, these flight plans have to be computed several times a day, because they depend on local weather properties, and in practice it is not easy to estimate these climate-sensitive areas accurately. The final solutions I present today are the actions already taken by several airlines to reduce the footprint of their in-flight services. This includes, for instance, the ambition to have zero-waste flights, that is, to compost, recycle, or reuse the paper, plastic and food items used on board the aircraft. In similar efforts, many airlines are also eliminating in-flight sales or choosing smarter ways of selling duty-free products. This is to avoid carrying all the perfumes or other items on the flights, which represent an extra weight to be carried. These are just some solutions that airlines can follow to reduce their carbon footprint and become more sustainable. There are, of course, several other solutions possible. So I invite you to share other sustainable initiatives and ideas you may be aware of in the forum. I'll see you there. Thank you." "Road Safety Course - Introduction Video","https://www.youtube.com/watch?v=3X_FPF_soQU","Do you think speed limits are a bit arbitrary?
Is it easier to keep to the limits on some roads than on others? And have you ever felt that the road signs, signals and other markings were so complex that it is difficult to focus on the business of driving? I am Marjan Hagenzieker, and together with my colleague Haneen Farah, I work at the Delft University of Technology, teaching and conducting research in the fields of road design, road user behavior and traffic safety. Together we run the traffic and transportation safety lab, where we investigate these topics. Our aim is to improve traffic safety by bringing together the fields of engineering and behavioral sciences and by developing models and measures. We study all kinds of road users, from cyclists to drivers using automated vehicles, and we collaborate with various national and international partners to tackle a wide range of topics and societal challenges. We have developed an online course in which we analyze how road safety can be improved by looking at a wide range of factors. For example, how road infrastructure can be designed to better fit human needs and abilities, or how to increase the safety of road users through education and training. This course builds from the fundamentals towards a multidisciplinary perspective on these issues. This means that it's useful both to newcomers looking to build a strong foundation in this field and to experienced professionals interested in gaining novel insights or new knowledge outside their area of expertise. The course consists of videos, assignments and interactive feedback sessions with course instructors. That's us and our colleagues. And through real-life examples you will learn how theories and models can be applied to your actual situation. Does this sound like the course for you? Visit our website and find out more." "A Case for Policy Analysis Water Strategy and Planning Program","https://www.youtube.com/watch?v=nSamaSYPF-o","Problem formulation is critical for policy analysis. Solving the wrong problem is one of the most well-known pitfalls, and unfortunately it is quite common. Before you invest in any further problem analysis, you will want to do a first check, to see if you can indeed expect that your topic could benefit from policy analysis. This check is fairly easy to do once you know the key components of a policy problem. Let's look at them one by one. First, we speak of a problem when there is a gap between the desired situation and the existing situation, or when we expect a future gap between a desired future situation and the expected future situation. For instance, 30 years ago, climate change was a problem, because rising temperatures were expected to change future climate conditions, leading to numerous undesirable consequences. Today, you could argue that climate change is a current problem, with more and more extreme weather events that lead to loss of life, economic damage, and other undesirable consequences. These are relevant gaps for a policy problem. Not everyone may worry about the same gap. What for one organization may be a problem, may be an acceptable situation for another. In short, the problem is subjective. Different stakeholders will see different desirable futures, and therefore see different problems. You have to be explicit about your problem owner. Is this the ministry for environmental affairs or the ministry for industry? It will make a difference. In a policy, we formulate actions, strategies or regulations that we expect will help us to improve the situation.
We do this to close, reduce, or prevent a gap. For instance, to prevent damage from flooding. This means that for a policy problem, a problem owner and a gap alone are not sufficient. We also need to have some influence on this gap, some actions we could consider to solve the problem. This influence can be direct, but also indirect. And this brings us to the next element of a policy problem, at least for policy analysis. We have a gap. We have actions or policies to influence this gap, but to require further analysis, we also need a dilemma. A dilemma created by the presence of different alternative actions, each of which will have partly desirable and partly undesirable consequences. There are trade-offs involved. Finally, policy is all about coordination or collective action. This can be coordination in a private company between different corporate units or staff members, or coordination in the public domain between multiple actors. This means that besides your problem owner, other stakeholders or actors will also be involved in the problem. We will use both terms to refer to organizations, groups or individuals who play a role in a policy problem. Either because they also care about the problem, because they are supporters or opponents for your problem owner, or because they are needed for a successful solution. Or maybe they are likely to be affected by a solution. Or because they are causing the problem. So, in conclusion, your very first step as a policy analyst is to check that you are dealing with a policy analysis problem, and to do a first check against the key elements: problem owner, gap, actions, trade-offs causing a dilemma, and actors or stakeholders. If you have checked that those elements are there, you are ready to start." "Querying and Exporting Relational Data (SQL)","https://www.youtube.com/watch?v=BQtHIbRy4-g","Welcome to the introduction to data management for machine learning and AI. In today's session, we will cover querying and exporting data from a relational database. Why would we be interested in doing so? The reason is that most introductory courses to data science or machine learning work with comma-separated value files. Also, many training data repositories like Kaggle distribute data sets in the CSV format. Well, this is perfectly fine when training or learning to develop machine learning applications. But as soon as you end up in the real-world scenario that you need to extract data from an existing database, you need to be able to deal with relational databases. Of course, this very short lecture cannot fully teach you how to do this. Instead, I will try to give you a short glimpse into how to query a relational database. But when the time comes and you must do this task for real, you will hopefully remember which things to read up on. To summarize, the goal of this lecture is to provide you with a quick overview of SQL, the structured query language for relational databases. We can then use SQL queries to aggregate data from multiple tables and export them to comma-separated value files or an Excel file. Alternatively, you can also feed such query results directly into a pandas DataFrame, as sketched below. You will probably learn about data frames in other courses. However, keep in mind that the key to mastery of SQL is hands-on practice, so exercising is very important. We will be using real data found in the musicbrainz.org database today.
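To give you a first impression, here is a minimal sketch of that last idea. It uses Python's built-in sqlite3 module and a hypothetical local database file with an artist table; the real MusicBrainz data lives in PostgreSQL and uses different names, so treat this purely as an illustration of the pattern.

    import sqlite3
    import pandas as pd

    # Open a (hypothetical) local database file.
    conn = sqlite3.connect("music.db")

    # Load the result of an SQL query straight into a pandas DataFrame ...
    df = pd.read_sql_query("SELECT name FROM artist LIMIT 10;", conn)

    # ... and export it as a comma-separated value file for later use.
    df.to_csv("artists.csv", index=False)
    conn.close()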
MusicBrainz is a community-driven encyclopedia about music artifacts, like artists, recordings and albums. The data used by the MusicBrainz website is stored in a relational PostgreSQL database management system. Of course, using a proper database management system is important to power such a popular website, as potentially thousands of page views will need to be served per minute. The performance and reliability features provided by such database management systems are key towards that goal. We will be using a local copy of the MusicBrainz database running on my own system, which is possible as the data is available for download under open access licenses. As a side note here, there is also a web service API available, which allows you to query MusicBrainz data without replicating the full database locally. We will not be using this API today, but keep in mind that many web platforms provide such access, which can be very convenient in certain use cases. Check the API out later. This is an example page of the MusicBrainz website, showing the detailed information for the composer John Williams. Here, we see a list of all John's recordings and some extra metadata about him as a person. Our plan for today is as follows. We replicate all the MusicBrainz data into a local database management system, and then run SQL queries to create various single-table data files for later data experiments and analysis, for example using Python and pandas. Sounds easy, right? But let me remind you that typically 80% of the time in machine learning projects is consumed by data management tasks. Maybe a little anecdote from when I was preparing these slides. I assumed, naively so, that downloading and importing the MusicBrainz data into my own DBMS, and getting some simple queries done, would maybe take me about 20 minutes. However, three hours later, it was clear that this was overly optimistic. So what happened here? Ah, it was actually just the usual stuff. Part of the import scripts I used were outdated, there were version conflicts between different software libraries I had installed on my system, the documentation of the scripts was incomplete and outdated, and finally, after I managed to replicate all data, it turned out that the schema used by the database was very complex, and it took me a long while to actually understand it. Also, some of the data I hoped to get was not available, as it was restricted due to privacy laws, for example, ratings given by individual users. I would argue that many of these issues are, well, actually somewhat normal, and it is not uncommon that dealing with data import tasks takes much, much longer than you might think. But let's leave these troubles behind and have a look at what we get. We get over 2 million artist records with over 27 million recordings, and over 3 million releases. Releases are things like albums or singles. This is also a lot of data in sheer volume. The compressed data dumps I downloaded were 5 gigabytes, and after extracting and importing all these dumps, they take up 33 gigabytes on my hard drive. This is actually quite a lot of data, and it's certainly way more data than you would typically want to handle manually as a comma-separated value text file. And I certainly would not recommend anybody to even try to query such data directly from a file without using the awesome powers of a database management system.
Here you can see an example logical schema for a MusicBrainz-style relational database I was using in one of the previous videos. It looks very easy to understand. Artists are connected to recordings, recordings are connected to albums, well, and that's essentially it. This, of course, was just a toy example. The real schema used by the MusicBrainz database is much more complex than that. It actually uses a mind-staggering number of 296 database tables, and if you want to import some extra metadata, you can easily add another 50 tables. The online documentation of the schema uses an extremely simplified visualization, skipping over a lot of detail. But this diagram is good enough, if you stare at it long enough, to get a first understanding of how the database is structured in general. I also tried to create an automatic visualization of all these 296 tables used in the schema, which resulted in this super interesting piece of art, which you can see here in the middle, or can't, because it's just too tiny. I even gave up trying to understand these way too many tables. Why am I even talking about this? Well, my takeaway point is that real data, used by real systems, is often very big and very complex, and by far exceeds what you typically see in small toy examples used in an educational setting. Also, many of the data sets you find in repositories like Kaggle have actually been carefully pre-processed, curated, and simplified before being shared. They are not representative of what you will find in real relational databases and real systems. On this slide, however, you see the visualization of the real data structures. Now, before I fall back to using my toy examples myself, let's let this sink in and be prepared for when the time comes when you face real data with a similar level of complexity. Good. It's toy example time again. I will make a simplification of the MusicBrainz schema, which looks like this diagram. I'm only focusing on the entity types I'm interested in. These, in this case, would be artists, recordings, releases, and release groups. Release groups are things like albums seen from a conceptual viewpoint. A release group could, for example, be the album Star Wars trilogy soundtrack, and release groups can then have multiple releases, which typically happen in different countries or on different mediums. For example, there might be a German CD box release of the trilogy soundtrack album, or another release in an old-school fashion using vinyl. The fascinating part about this simplification is that I could do it completely within my local database management system using a concept called views. I will not go into detail here, but views are virtual tables, which can be defined using SQL queries, and which then dynamically load data from other existing tables; a small sketch follows below. So essentially, I created the simplified logical schema, but when I'm using it, the DBMS translates and maps all queries I issue onto the significantly more complicated schema with the 296 tables you have seen on the previous slides. And I don't notice this at all. This allows me to run simple toy examples on the next slides, and still use the full extent of the 2 million artists and 27 million recordings of the real database.
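Here is a minimal sketch of how such a simplifying view might be defined, again using Python's sqlite3 module and assumed, simplified table and column names rather than the real 296-table MusicBrainz schema:

    import sqlite3

    conn = sqlite3.connect("music.db")  # hypothetical local database file
    conn.executescript("""
    -- A virtual table that hides the join behind a simple name.
    CREATE VIEW IF NOT EXISTS simple_recording AS
    SELECT a.name AS artist_name, r.name AS recording_name
    FROM artist a
    JOIN l_artist_recording lar ON lar.artist_id = a.id
    JOIN recording r ON r.id = lar.recording_id;
    """)

    # From now on, the view can be queried as if it were an ordinary table;
    # the DBMS maps each query back onto the underlying tables.
    for row in conn.execute("SELECT * FROM simple_recording LIMIT 5;"):
        print(row)
    conn.close()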
Okay, let's now introduce SQL, the standard structured query language, which is used by all relational database management systems. As a side note, many people pronounce SQL as "sequel", but they mean the same thing. I do not expect that you can understand SQL just by listening to me today. You need to read up on additional concepts and also practice, practice, practice in order to get it. However, I can provide you with an overview of the general idea. First off, SQL is a declarative query language. Its vocabulary is inspired by informal English. Declarative here means that you describe what you are looking for with this query, but not how to execute that query efficiently. This is one of the most awesome features of a relational database management system. The question of how to execute and optimize a query such that it is as fast as possible is automatically handled by the system, and you don't need to do anything here. The basic syntax of SQL looks like this: SELECT attributes FROM tables WHERE condition holds. Of course, it can do so much more. It can also join multiple tables, compute statistics, create aggregates, and run subqueries or set operations. But at its very basic, it is just this: select, from, where. Let's try our first example query: find all the recordings of John Williams and return their ratings. How would that work? Well, select John Williams from the artist table, but at the same time also select all the recordings which are connected to John Williams. This connection is expressed by this intermediate table here in the middle, between artists and recordings, and I call this table for now link artist recording. This table contains artist IDs and recording IDs. An artist is linked to a recording if this table contains the ID of that artist and the ID of the recording in the same row. But let's start simple. Let's first find John Williams. The SQL query we create for this would be: select star from artist where name equals John Williams. Star here represents that I want to return all attributes of the artist table instead of selecting a specific attribute. When executing that query on my data replica, it turns out that there are quite many artists called John Williams. Too bad. Things are never really that easy. By looking up some of these John Williamses on the MusicBrainz website, I found out that the John Williams I was looking for is the one here with ID 94. Just to make it clear, I'm looking for the John Williams, the famous composer who made the soundtracks for movies like Star Trek or, naturally, Star Wars or Indiana Jones, and not the John Williams, the unknown sound technician who makes the CD recording of a local high school band. But now that I found him, how do I find all the recordings connected to John Williams? I will now just throw this SQL query statement at you, and later, when recapping this lecture, try to analyze it more carefully. What are the main parts here? We select from the artist table where the artist ID is 94. Additionally, we join or link all the found records such that the artist ID matches the artist ID found in the link artist recording intermediate table you have seen before. Combining tables such that the IDs match in this way is what we typically call a join, so we join these two tables. Then we do exactly the same thing for the recording table. Finally, we decide to only select recording ID, name, length and rating, but not any other attributes. We can see the result in the table on the bottom right, and this can easily be exported to a comma-separated value file, an Excel file, or maybe even directly loaded into a pandas DataFrame. This is pretty cool and easy.
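If you want to play with this yourself, here is a self-contained toy version of that query in Python, using an in-memory SQLite database and a couple of made-up rows; the table and column names are simplified assumptions, not the real MusicBrainz schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE recording (id INTEGER PRIMARY KEY, name TEXT,
                            length INTEGER, rating REAL);
    CREATE TABLE l_artist_recording (artist_id INTEGER, recording_id INTEGER);

    -- Two artists sharing the same name, as in the lecture.
    INSERT INTO artist VALUES (94, 'John Williams'), (512, 'John Williams');
    INSERT INTO recording VALUES (1, 'Main Title', 182, 4.8),
                                 (2, 'School Band Demo', 120, 2.0);
    INSERT INTO l_artist_recording VALUES (94, 1), (512, 2);
    """)

    # Join artist -> link table -> recording, keeping only artist 94.
    query = """
    SELECT r.id, r.name, r.length, r.rating
    FROM artist a
    JOIN l_artist_recording lar ON lar.artist_id = a.id
    JOIN recording r ON r.id = lar.recording_id
    WHERE a.id = 94;
    """
    for row in conn.execute(query):
        print(row)  # -> (1, 'Main Title', 182, 4.8)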
Let's try another query, this time trying to find all of John Williams' albums. Albums here means release groups where the type is album. It's the same idea as before. I start by selecting from the artist table the artist which has ID 94, which should only be one. Then I join the artist table with the link artist release table, join that with the release table, and join that finally with the release group table. And at the very last, I select only those rows where the release group type is album. I also got a little bit lazy here in my example and used the feature of SQL for defining shortcut names for tables. So instead of always typing l_artist_release, I temporarily renamed, for this query only, this table as LAR. And the artist table I simply just call A here. Again, I would recommend you to look up several other examples of SQL queries, read up on some extra documentation, and definitely try some queries yourself. Let's make another observation about some of the awesome features of relational database management systems. I told you before that the database I replicated has over 27 million recordings and two million artists. The queries I just showed you were only selecting one specific artist out of these two million, and then trying to find all the recordings of this artist, which should be around 6,700 out of the 27 million stored in the system. And the data set actually takes 33 gigabytes on my hard drive. And all of this just happened in 69 milliseconds on my desktop computer. I assume that sometime in the future, you will be experimenting with Python and pandas. Then try to find a specific record in a slightly larger comma-separated value file, and use a stopwatch to check how long this takes. I would bet good money that it would actually take much longer than 69 milliseconds. This is one of the reasons why I would want to have a proper relational database management system when doing complex queries, because over 50 years of relational database query processing research and development have taught us how to execute these queries really, really, really fast, using all kinds of internal optimizations and tricks. However, the whole point of this exercise was to extract data sets which contain exactly the type of data you need for your later machine learning application development. So all you now hopefully need to do is export the results and then load them without ever querying them again. Or, maybe to phrase that much more strongly: if you ever end up in a situation where you have multiple extremely large comma-separated value files, and you try to join them in Python, something went horribly wrong in your data preparation pipeline. Good, but let's get back to the exports. Like this file here, it contains all the recordings of selected artists, well, just one in my toy example, together with their ratings. With some extra data, files like this could, for example, be fed into the training of a music recommendation system. So, what did we learn today? SQL, the structured query language, can efficiently and flexibly query relational data. It can be executed very fast, and the result of an SQL query is always a single table. This table can then be easily exported, for example, into a comma-separated value file or an Excel file, supporting downstream AI and machine learning applications. Thus, what we have seen today was a toy abstraction of an ETL, extract-transform-load, pipeline: extracting data from an imaginary enterprise database system for later analysis.
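Returning to the album query with its table aliases, here is roughly what it could look like, combined with the export step; again, all table and column names are simplified assumptions, and since release happens to be a reserved word in SQLite, it is quoted in this sketch:

    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("music.db")  # hypothetical local database file

    query = """
    SELECT rg.name AS album
    FROM artist a
    JOIN l_artist_release lar ON lar.artist_id = a.id
    JOIN "release" rel ON rel.id = lar.release_id
    JOIN release_group rg ON rg.id = rel.release_group_id
    WHERE a.id = 94 AND rg.type = 'Album';
    """

    # Run the query, collect the single-table result, and export it.
    albums = pd.read_sql_query(query, conn)
    albums.to_csv("john_williams_albums.csv", index=False)
    conn.close()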
What should you be doing now? Well, definitely read up on some more advanced SQL concepts and practice a bit. And with this, I thank you for your attention today, and hopefully see you another time. Thank you." "AI and Machine Learning Data Sources","https://www.youtube.com/watch?v=C-IS-Y7i1M0","Welcome to this introduction to data management for machine learning and AI. In today's session, we look at the question of where machine learning data comes from. As a reminder, as part of the machine learning workflow, we need to obtain training data to train our machine learning models. This training is additionally guided by continuous validation using validation data, and finally, the model is evaluated using the testing data. Only if the model shows sufficient performance on the provided testing data can we deploy the model. But where do we actually find or obtain that data? We briefly discuss five approaches to obtaining machine learning data. In short, these are: one, reusing existing training data made by other projects. Two, buying and getting data from external data repositories or data marts. Three, making data sets from internally available data in your own project or enterprise yourself. Four, employing external workers or people in data farms or crowdsourcing platforms to make the data for you. And finally, five, making artificial or synthetic data using AI techniques. Let's have a look at the first option: reusing open machine learning training data. This is probably what most aspiring machine learning developers will do during their training and education. Instead of making your own training data set, you simply reuse data sets made by other people for other projects and other contexts. For many standard tasks, there are actually several good standard data sets around. But the problem is, what happens if you do not have a standard task? You can then, of course, still try to reuse some of these data sets, but then the big question is if that data set is still relevant to your task or not. But as you're watching this introduction video, you are probably still in your education and training phase. Then reusing some premade training and test data is a great way to quickly get a project going in order to practice and develop some machine learning development skills, without being bogged down by data management issues. But where do we find such reusable data? Well, a common data source is Kaggle, a platform which specializes in machine learning competitions. Premade data sets are provided with a specific task and goal, which are a great way to gain some experience as a developer in training. Some of these data sets can also be relevant in other projects and can be reused. And of course, most AI and machine learning courses at universities will provide you with some data sets to use for practice and training. But there are more platforms like this, such as the open data repository of Amazon AWS or the Open Data Search hosted by Google. Just take some minutes and browse around these platforms and see what you can find. The second approach is to extract data from repositories and data marts and make your own training and testing data from them. Such repositories are often commercially offered by multiple companies, but also by many government institutions. However, these data sets are typically not specifically made with machine learning training data in mind.
These are data sets which you can then buy or obtain for free, typically containing summarizing statistics or factual data on certain topics, like, for example, the average air quality in the Netherlands on a daily basis at a one-square-kilometer resolution for 2022. It's a very valuable data set, but it's not ready for direct deployment in a machine learning system. Extracting relevant data from these repositories and turning it into a suitable test and training data set is a hard and time-consuming task. Me being in the Netherlands, I use as an example the CBS, the Centraal Bureau voor de Statistiek, which is a governmental institution in the Netherlands collecting and creating statistical data for the country. Most of the data sets here are shared for free, but many more of them can be commercially obtained or can be licensed for research purposes. Well, at least if the legal restrictions allow to do so. However, many countries have also adopted an open government and open data policy. This also applies to the Netherlands: thereby, by political mandate, all data sets which are collected and which are not restricted by privacy, safety or other concerns are to be shared with all citizens. Many of these data sets can be found in the data repository of overheid.nl, and I would invite you to later browse around in that repository and see what you can find. Or check out the respective repositories in your own country if you are not residing in the Netherlands. On to approach number three. If your AI project is attached to an existing software system or embedded in an existing enterprise context, you might already have data available yourself. So for example, if you run a web shop, you probably already have data on customers, the items you're selling and the purchases customers make. Similarly, if you host a community site, you probably already have data about your users, the content they create and the interactions they have with each other. This data typically resides in your enterprise data management system, which also happens quite often to be a relational database system. The big challenge now is: how do you extract data from these systems and transform it in such a way that it can later be loaded into your machine learning lifecycle? However, I also want to point out that it's not just a technical problem of how to design a data extraction pipeline or how to transform and load your data. Data might be residing in your own system, being collected by your own application; however, when that data was collected from people like your customers, many legal issues can arise, and you can simply not do whatever you want with your data, because it actually might not be usable at all. These restrictions are extensively covered in many, many regulations, like, for example, in Europe, the General Data Protection Regulation, better known as GDPR, or the new European Data Act. Look these things up for more information. When talking about extracting data from other systems, you will often hear the terms pipelines or ETL processes. ETL stands for extract, transform and load, and describes the process where data is extracted from an external database, cleaned and then transformed and integrated into a form that is better suited for the intended use, and then finally loaded into a different system like, for example, a machine learning training process or a data warehouse.
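To make the three letters concrete, here is a minimal ETL sketch in Python under assumed names: a hypothetical web-shop database shop.db with a purchases table, pandas for the transform step, and a CSV file as the load target.

    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("shop.db")

    # Extract: pull the raw rows out of the operational database.
    raw = pd.read_sql_query(
        "SELECT customer_id, item_id, price FROM purchases;", conn)

    # Transform: clean and reshape, e.g. drop incomplete rows and aggregate.
    clean = raw.dropna()
    per_customer = clean.groupby("customer_id")["price"].sum().reset_index()

    # Load: write the result where the downstream training job expects it.
    per_customer.to_csv("training_data.csv", index=False)
    conn.close()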
Creating effective and efficient data pipelines is a field of work with many good job opportunities, so maybe pay attention to these things. Approach number four, which we want to discuss today, would be the use of data farms or crowdsourcing. The problem we want to address with this is that the data we would need does not yet exist at all in any repository, but needs to be made specifically for you and your project. This commonly happens if you have unlabeled data from some other source, but lack the labels in order to use it for machine learning training. As an example, consider that you run a community forum where people can have online discussions, but you want to moderate for hate speech and harassment automatically. Of course, it would be easy to extract and label examples of people's text posts from the central database underlying your system. But in order to detect hate speech, you would now need to annotate each post as to whether it contains hate speech or not. Unfortunately, understanding the finer semantic points of human language is a rather complex task and typically requires another intelligent human being to do so. But how would we efficiently and effectively label, let's say, 100,000 text posts collected from your forum to train a classifier? Of course, you could do this yourself and then take five years, or abuse your employees or students to do this for you. But more commonly, you will look towards crowdsourcing or data farms. Data farms are one-stop solutions, large businesses, to which you can outsource the whole data labeling or data cleaning problem; they take care of it using their own workforce, and you simply pay the bill at the end. In contrast, crowdsourcing means that you are recruiting paid or unpaid volunteers, typically using some kind of internet platform, and they perform the task for you. Let's talk about crowdsourcing first. There are multiple approaches to doing crowdsourcing, and one of the older ones you have probably already encountered several times when browsing the internet. You can see here an example. This would be piggybacking a crowdsourcing task on top of another task. This is one of the later variants of reCAPTCHA by Google, whose primary purpose is to decide if the user of a website is a human or a machine. But at the same time, it also creates labeled training data for different computer vision systems. So, essentially, every time you are solving one of these captcha challenges to prove that you are human, you are actually also helping a self-driving car to get better at its job. Another common approach is to use your volunteer community to get the data you need. There are many platforms out there which invite you to provide ratings or reviews for various services and products. And this is typically a very good way to get such labels at scale. Here, as an example, we see movie ratings on The Movie DB. The more traditional approach to crowdsourcing is paid micro-task crowdsourcing. There are many platforms which do that, for example Amazon Mechanical Turk or Prolific. The idea is that you make small web interfaces in which people can perform your data task, like annotating or labeling your text. These tasks are then distributed by the crowdsourcing platform to a large number of users, who can volunteer to do your task, and you typically pay each person for the contributions they make. The last crowdsourcing approach I want to address today is crowdsourcing as citizen science.
This is typically performed by research institutions or public agencies, motivating citizens to contribute data to ongoing research. For example, the Dutch government, in collaboration with the Naturalis museum in Leiden, runs a study on insect biodiversity in the Netherlands every couple of years. This study requires data which simply doesn't exist yet in any form. Therefore, Dutch citizens are asked to go outdoors and, for a day or two, count all the different insects they encounter in their gardens and parks, and submit this data to a central platform, essentially creating a new data set using all these volunteers. Now, this brings us to data farms. Data farms share many crowdsourcing ideas, but are typically large commercial businesses which build upon their own data workforce. This really is a big global business with many implications and ramifications from an economic and labor perspective. It is not uncommon to set up these data farms in low-income countries and then hire a large workforce which annotates and cleans data as their main full-time job, rather cheaply. There have been many discussions that such cheap access to labeling work can provide a competitive edge for employing AI systems, as for example pointed out by this article by The Economist you see on the left. Also, data farms, due to their remote-work nature, could allow us to bring labor into regions where usually not many paid jobs would be available, as mentioned here by this Financial Times article on the right. However, data farms also open the opportunity for different shades of labor abuse by taking advantage of more relaxed labor laws, as is also frequently discussed, here in an example by The New York Times. This type of data labor is still a new phenomenon, which is not yet well understood and regulated. For example, there are recent discussions debating the psychological impact of doing data work. Just imagine it would be your eight-hours-per-day job to provide labels for image moderation on an online platform. During these eight hours, every single day, all you do is stare at pictures of violence, gore or other explicit content most people wouldn't like to see on their online platform. There are several such workers who reported developing psychological disorders from doing this type of job, like, for example, developing sleeping disorders or depression. While I personally wouldn't know how to regulate or handle this data-farm-style work properly, let's just take away from this brief discussion that there is a lot of potential for future debates and regulations. The last approach I want to discuss today for obtaining data is relying on artificial or synthetic data. This is a rather bleeding-edge approach, which is actively discussed in the professional and academic community right now. The core idea is to train an AI system to represent the type of data we're interested in, and then generate artificial examples of the same type of data. So essentially, you use an AI system to create training data, which can be used to train another AI system. Why would this be a good idea? Well, modern machine learning algorithms, especially deep-learning-based ones, require a humongous amount of training data. In these scenarios, having more data available is always better than having less data available, even if the quality of the data is slightly lower than that of manually created data.
This type of approach also works well for augmenting existing real data sets with additional data points in order to increase their size, for example for balancing underrepresented classes. Again, keep in mind that using synthetic data is a bleeding-edge academic approach, but let's play around a little bit and see what we can even do with existing techniques. In this example, I tested GPT-2, a synthetic text generator, to create some synthetic reviews for movies. In particular, we use a language transformer which is hosted by Hugging Face, and this transformer takes a starting phrase given by me and then completes it with what the model perceives as realistic English language; a small sketch of this follows below. Keep in mind that such artificial generators typically do not understand what they are doing. It's just a language model which relies on learned language patterns, with no real semantic insight. So, let's assume I need to create training data for negative movie reviews. I start the generator with the phrase "this movie is a horrendous piece of", and this is the example output shown. The resulting review sounds quite realistic: "This movie is an absolutely horrendous piece of filmmaking, one that is filled with absolutely no story, no characters and no real sense of purpose," and then it just continues to insult the movie some more. And in the last sentence it concludes with "I cannot think of a better way to spend my Saturday than watching it." I'm not really sure if that's a good example of a movie review, but it certainly uses good language. Anyway, such data generators are also interesting for generating all kinds of text, and I would invite you to play around with them a little bit yourself; just check the references on this slide. Just keep in mind that these things indeed have no idea what they are actually doing, and despite using proper English language, will more often than not generate meaningless or actually outright wrong texts, which can be quite harmful when used in the wrong way. Just look at this example: "The Netherlands is a hotspot for international drug trafficking," and the AI model continues. What now is being generated by the AI is something which sounds very much like a proper news report, using mostly proper English language and proper grammar, but it's actually wrong in nearly all the statements it makes. Just check it out. Please don't use this technology now to build an automatic fake news internet troll bot. Similar technologies are also being employed for generating artificial images. Here are some examples based on mini DALL-E, a weaker version of the DALL-E model. I tried to make some images for completely ridiculous training tasks, like recognizing penguins in a car or recognizing bicycles made out of cheese. Well, judge for yourself whether these results are actually any good. However, more powerful models, when being adjusted correctly and used in the right context, can provide surprisingly convincing training data. See, for example, here, artificially generated images of road signs with a strawberry on them. Such images could maybe be used to train a self-driving car on recognizing a new sign being introduced, before the sign is actually available. Anyway, as a takeaway, many experts, for example here the Gartner group, believe that synthetic training data will play a central role in future AI systems and will overtake the use of real data in volume in just a couple of years.
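For the curious, here is roughly what such an experiment can look like in Python, assuming the Hugging Face transformers package and the publicly hosted gpt2 model; the exact interface the lecture used is not specified, so treat this as an illustrative sketch:

    from transformers import pipeline

    # Load a small, publicly available text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    # Complete a starting phrase; the model only continues language patterns,
    # it has no semantic understanding of movies or reviews.
    result = generator(
        "This movie is a horrendous piece of",
        max_length=60,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])  # synthetic text, not a real review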
To conclude today's session, what did we learn? Good data is hard to come by, and getting it might be very expensive and time-consuming. There are several approaches to obtaining data. Summarized, we can reuse data from others, extract it from external data repositories, extract it from your own data, use large crowds to make data for you, or fake data using AIs. And with this said, I thank you for today's attention and hope to see you another time." "Data Visualization in Python - Seaborn","https://www.youtube.com/watch?v=rgWkTMSkdGA","Next, I will explain some common statistical plots and how they can be generated in Seaborn. To demonstrate how to generate plots in Seaborn, I'm going to create a new Python notebook called vis, import the necessary Python libraries, and load the two data sets you are already familiar with. I need to merge the airports data frame into the flights data frame, df. This is the resulting data you already know from earlier. First is the king of all plots: the bar plot. The bar plot is probably the most used kind of plot. It is used for comparing different amounts. The figure on the left shows a simple bar plot, while the right-hand-side plot includes error bars showing variations due to other features. To create the bar plot, we first get the top 10 airports and top 20 aircraft types outside of the United States. Then we query the flights only for these airports and these top 20 aircraft types. Finally, to prepare the data for the first bar plot, I generate the data containing the flight count per aircraft type. With the Seaborn barplot function, we can provide the data, the feature for the x-axis and the feature for the y-axis. And voila, we get a not so bad looking bar plot. First, let's remove the unnecessary color coding by making everything blue. I also like big figures, which have better resolution for scientific papers or reports. Let's change the size and assign better names for both axes. The new problem now is that the font is too small for my liking. So I adjust the Matplotlib font configuration with a new size of 16 points. I also like the Ubuntu font over the default Matplotlib font, so I just set that as well. Now, running the bar plot again, I have a nice plot which is much clearer. The second bar plot will show the flight count per aircraft type considering different countries. For that, I group the flights by typecode and country. Then I create the bar plot. With the same features for the x- and y-axis, the error bars are shown. These represent the variations of the same aircraft type among different countries. The line plot is often used for showing trends, especially trends between two discrete data points. Here you see three line plots showing the number of flights departing from an airport. The first one shows the departing flights during a 24-hour period for all Mondays. The second one shows the confidence interval considering the variation over different weekdays. The last one shows two different groups, indicating the number of departures and arrivals at the same time with their own confidence intervals. With the line plot, I want to show the trend of the number of flights at Amsterdam Airport over the 24-hour period. This block of code allows me to get such a trend for both departures and arrivals for different weekdays over the entire month. The data frame for line plots looks only at departing flight numbers on Mondays. We can quickly visualize the trend by providing the feature x as hour and feature y as count. The extra statements make the plot larger with better labels; a minimal sketch of this pattern follows below.
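As an aside, the overall pattern looks roughly like this minimal sketch, with made-up hourly counts standing in for the real flight data and assumed column names:

    import matplotlib.pyplot as plt
    import pandas as pd
    import seaborn as sns

    # Made-up data: number of departing flights per hour of the day.
    data = pd.DataFrame({
        "hour": list(range(24)),
        "count": [2, 1, 1, 2, 5, 12, 25, 30, 28, 26, 24, 25,
                  26, 25, 24, 26, 29, 31, 28, 22, 15, 10, 6, 3],
    })

    plt.rcParams.update({"font.size": 16})  # larger fonts, as in the lecture
    plt.figure(figsize=(10, 6))             # a bigger figure
    ax = sns.lineplot(data=data, x="hour", y="count")
    ax.set(xlabel="Hour of day", ylabel="Number of departing flights")
    plt.show()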
Next, we use all the data for generating a line plot. Now the result also includes a confidence interval, covering the variation in both departures and arrivals, as well as the variation across different weekdays. Next, I want to separate the arrival and departure flights and see how they are related to each other. In this plot, we can simply specify the hue parameter as kind, indicating whether a flight is departing from or arriving at the airport. Now we have the two different trends represented in the same figure, using line plots with confidence intervals. Another common statistical plot is the histogram. It shows the distribution of a value for a certain parameter. I'm sure you are all familiar with histograms. In Seaborn, there is another similar plot: the kernel density estimation plot, also known as the KDE plot. It estimates and shows the probability density function based on the data. Box plots are used to compare distributions. Here we are showing the variation in the number of flights per hour across different airports. Similar to the previous line plot example, we can also display two groups side by side, in this case the number of hourly departing and arriving flights per airport. It is a common task to visualize distributions in Seaborn. For this task, I'm interested in the top 15 airports. You can see most of them are from the United States, indicated by airport codes starting with K. I use this block to obtain the number of departing and arriving flights by hour for these airports. The data to be visualized is presented here. It includes the number of flights, the airport, and the type of flight. This block of code contains the boxplot function, which shows the flight numbers for all airports across different hours of the day. We specify the x feature as airport and the y feature as count, so the box plot is presented vertically. Next, if I specify the hue parameter again, I will get a grouped box plot, which shows the departure and arrival box plots side by side. Similar to the box plot, the violin plot can also visualize distributions. Instead of showing the descriptive stats, the violin plot uses kernel density estimations to approximate the probability density functions. Sometimes it can provide an elegant way of showing two groups of related distributions, like the one in the second figure here. Similar to the box plot, we can visualize different distributions with a violin plot in Seaborn. The code is almost identical, except for the function name. And that's the final result. When showing the departure and arrival stats side by side in the violin plot, we can again assign a hue parameter. I set split to true here and give it a binary categorical column, kind, for the hue. The inner quartile parameter allows quartile lines to be drawn in the violin plot. Then I reformat and run the cell. And finally, we have this nice grouped violin plot with indications of the quartiles; a small sketch of this pattern follows below.
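That grouped violin plot corresponds roughly to this minimal sketch, using random numbers in place of the real hourly flight counts and assumed column names:

    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    import seaborn as sns

    # Random data standing in for hourly flight counts at two airports.
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "airport": np.repeat(["EHAM", "KJFK"], 200),
        "kind": np.tile(["departure", "arrival"], 200),
        "count": rng.poisson(30, 400),
    })

    # Split violins by flight kind, with quartile lines drawn inside.
    plt.figure(figsize=(10, 6))
    sns.violinplot(data=data, x="airport", y="count",
                   hue="kind", split=True, inner="quartile")
    plt.show()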
If you want to show distributions of two related parameters, the joint plot is the best choice. On this slide, you can see two different representations of the joint plot. The first one is similar to the one-dimensional histogram, while the second one is a two-dimensional kernel density estimation plot. The joint plot allows you to visualize correlations between two different features. To demonstrate the KDE plot, I reuse a custom function to calculate the duration and distance for all flights departing from Amsterdam. Then, I want to plot the distribution of duration using the common histogram. I specify the binwidth parameter as 30, which groups flight durations into 30-minute intervals. Now we have the histogram. Similarly, we can generate the KDE plot. We can see the result is consistent with the histogram. Next, I want to show the relationship between duration and distance by using the joint plot from Seaborn. I specify duration and distance as the features for the x- and y-axis. The resulting figure shows two histograms on both axes, and the relationship is shown in the scatter plot. Apparently, the scatter plot is not the best choice here, since there are many overlapping points. We can change the kind parameter to KDE to show the 2D KDE plot. This process takes a bit longer, since Seaborn has to perform the kernel density estimation first. Once it is complete, we can see the contour plot in the center and a single KDE plot on each axis. The last example I want to show you is the relational plot in Seaborn. This can be used if you want to visualize the same parameter over different groups. The relational plot automatically generates different subplots and ensures consistent ranges on the axes across the different subplots. In this demonstration of the relational plot, I want to show the hourly trends in departing and arriving flights across major European airports. These are the top airport codes I'm interested in. The structure of the data is the same as the one for the line plot. With the relplot function, we need to specify an extra parameter, that is, the column feature. This way, each small plot in the figure represents an individual airport. If we run this code, we get only scatter plots in one row. This can certainly be improved. First, I want to have two rows, that is, four columns per row. And I also would like to use line plots instead of the default scatter plot. And finally, we have a nice relational plot showing flight trends across the top major European airports." "Ruralization Course - Introduction Video","https://www.youtube.com/watch?v=u4XMqvYG3G4","We are in an area that is often considered unattractive because there are fewer jobs, fewer opportunities, and some sectors such as agriculture are in crisis. Agricultural land is concentrated in the hands of a few owners. There are few land transfers, land prices are high, and agricultural policy tends to favor large farms. Access to land is therefore the number one challenge for new entrants to farming. Rural areas can possess innovative solutions to problems of rural decline, as well as initiatives that support rural regeneration. There are many types of manifestations of alternative futures: future images, scenarios and visions. Trend analysis is one of the methods to create these. Now it is time to learn how to develop these into a coherent future strategy for your rural region." "The Principles of the Circular Economy","https://www.youtube.com/watch?v=BknvimtSQjk","Welcome to this MOOC about the circular economy and thank you for joining us. In this first lesson we will talk about the main principles of the circular economy. As you saw in the video, the circular economy is inspired by living systems. So what are the characteristics of living systems? And how do they relate to our man-made systems? First of all, there is no such thing as waste in living systems. One species' waste becomes another species' food. A dead owl turns into nutrients for the soil. How can we replicate this idea in our man-made systems?
If we redesign products so they can be reused or disassembled at the end of life, we can keep those products and their materials at their highest value at all times. At its core, the circular economy is about waste equals food. This is the first principle. Secondly, living systems are diverse. Many, many species contribute to the overall health of a system. Greater biodiversity supports a system at a time of shock. An economy, a nation or a company can derive greater value from diversity by sharing strengths and having a greater pool of resources to draw on. Such a system would also be better able to bounce back from disruptive events. This second principle of the circular economy is to build resilience through diversity. Thirdly, living systems are powered by renewable sources. In this case, the power comes from the sun. If we are to build a circular economic system that works in the long term, then we need to draw inspiration from the third principle, which is to work towards energy from renewable sources. A circular economy isn't about one firm changing one product. It's about many actors working together to create effective flows of materials and information, with everything increasingly powered by renewable energy. What I have just described here is an example of a system we could create. When we think in systems, we begin to see the connections between people, places and ideas. We become less likely to be surprised by negative consequences of poor planning. Instead, we begin to see how we can create opportunities to generate economic, environmental and societal gains. This is why we need to think in systems, and that is the fourth principle of the circular economy. In this world of increasing demands, a linear system simply will not work. Governments, companies and societies are looking to the circular economy as a way ahead. In our next video we will explore why we need a circular economy." "What is a Circular Region?","https://www.youtube.com/watch?v=zs8tDLG3ta0","Hello! In this video we will be defining the concept of circularity, taking a look at its origin and key components, and considering how circularity translates to our scale of focus, that of the region. You will find out that although circularity can be understood as a scale-free concept, active strategies to achieve it, whether from a governance or a design perspective, do require a scale of focus. The setup of this introductory video is as follows. We will first define circularity and the concept of the circular economy, linking it back to sustainable development. Next we will focus on the necessary systemic perspective and the inclusion of social equity. In the final part we will explain the concept of value chains and how to translate them into general circular strategies, and explain their multiscalarity and the importance of the regional scale, also showing how scales interrelate by using an example from the food sector. But let us first of all start by placing it in its context. As you will already know, sustainable development requires a systemic approach. Our society is dependent on all kinds of resources such as materials, water and energy, and therefore is highly dependent on the infrastructures which carry them. The complexity of these networks is ever increasing, and their vulnerability to change is increasing in parallel. Drivers of change include resource depletion as well as climatic, demographic, political and technological factors. There is a growing consensus that the world may be entering a period of scarcity.
So it is more critical than ever to adopt a sustainable approach to development. In this complex setting of so many drivers of change, people, institutions, policies and systems are crucial starting points. An important step forward occurred in 2015, when the world embraced the 17 Sustainable Development Goals, or SDGs, which clarified that political will and community acceptance are essential for sustainable development. Though a component of other SDGs as well, SDG 12 in particular stresses the need for greater circularity, specified as responsible consumption and production. As you can see here in the five-year evaluation of the SDGs, we are actually lagging behind with regard to SDG 12 and several others. Now, by taking a closer look at regions, and in particular city regions, we can distinguish many resource and waste flows, as shown here, for instance, in the waste flow diagram of the larger Amsterdam metropolitan area. One could actually use the analogy of the metabolism, a framework for modeling complex territorial systems and their material and energy flows as if the territory were an ecosystem. It helps us to study the dynamics of a territory in relation to scarcity, carrying capacity and conservation of mass and energy. Now, as we have seen in the example of the Amsterdam metropolitan area shown before, our actual economy can be framed as largely linear. The prevalence of reuse within the economic models is increasing, but right now there is little that can be truthfully considered circular. So what distinguishes a linear economy from a circular one? A linear economy converts natural resources into waste via production. Within this process, natural capital is removed from the environment, and through pollution or waste the value of the natural resources is reduced. In a circular economy, however, there will be no loss of value and the net effect on the environment will be zero or even positive. This is also called a regenerative overall effect. In 2008 the Ellen MacArthur Foundation introduced the so-called butterfly scheme to explain the circular economy, with biological loops related to renewable resources and technical loops related to finite resources. The former achieves circularity through cascading, the latter through the so-called R-ladder approach. Concepts integral to the loops include regenerating or substituting materials, but also virtualizing material chains and restoring value. This, of course, with the aim to minimize systemic leakage and negative externalities. Reality unfortunately teaches us that there is still landfill and waste that is incinerated, the so-called low-value recovery, in this case of energy. Actually, overall, only about 9% of all global material resources are at the moment recycled annually. This is also referred to as the circularity gap. In the previous example of the Amsterdam metropolitan area, the largest share of material resources and their use is associated with housing and infrastructure, followed by nutrition and mobility. This brings us to the need for systemic thinking and the inclusion of social equity. For instance, look here at these graphs. The first graph on the left shows the environmental load related to energy resources, and the second includes metals in the scope, and a more nuanced image arises. Where we all thought that the desired move from left to right in the left image tells a clear story, once speciality metals are added in the right one, the overall assessment could be different. And this is why circularity requires a systemic thinking approach.
And this also means the importance of including a social equity perspective, such as elaborated, for instance, in the concept of doughnut economics. If you apply this further to the field of spatial planning and development, it can be summarized, as suggested by Metabolic, as the seven pillars of circularity. Materials in the economy are cycled continuously at high value. Water is extracted at a sustainable rate and resource recovery is maximized. Biodiversity is structurally supported and enhanced. Human society and culture are preserved. Human activities generate value in measures beyond just financial. And the health and wellbeing of humans and other species is structurally supported. And all of this, of course, from the perspective of equity, resilience and transparency. This brings us to the final part of this video, related to circular strategies, value chains, the notion of multiscalarity, and how the regional scale is crucial. Remember the two value chains in the circular economy butterfly scheme of the Ellen MacArthur Foundation? It comes down to either retaining value in the technical material chain, in the case of non-renewable resources, or, in the case of renewable resources, retaining value in the biological material chain. However, the ultimate goal should always be to try replacing non-renewable with renewable resources, as shown in this overview of both value chains. The stepped approach to be applied to achieve an optimized cascading of these values is also known as the R-ladder, with 10 circularity strategies from linear to circular. These 10 circularity strategies can be clustered into three groups, namely narrowing loops, which decrease resource use, slowing loops, which extend product life, and closing loops, which recycle materials. And eventually they can be translated into 10 design strategies which focus on resource reduction for narrowing loops, durability and standardization for slowing loops, and recycling for closing loops, as seen in this picture. Which brings us to the importance of a regional approach within a multiscalar perspective. As we have seen, circular strategies require a systemic perspective. This is made even more critical by the fact that an increase of scale often results in increased complexity and interdisciplinarity. And it leads to solutions that cannot be found, considered or assessed at a micro, meso or macro scale in isolation. They are all interconnected. To explain this, I use here an example related to urbanization and agriculture. We can look at the effects of urbanization and the resources required to feed urbanized societies, which all together result in cascading effects of land encroachment. In the end, this reaches the limits of our global terrestrial ecosystems as defined by our planetary boundaries. This is also known as the unfair teleconnections impact on the Earth system, which makes it impossible to realize a circular society with regard to food. To overcome this, it would be necessary to respond with global advisory planning processes which align with regional planning, preserve peri-urban croplands, and maintain and restore soil quality through protecting and facilitating investments in regional, urban and peri-urban agriculture, and with robust common property rights. These same regional policies could then further enable building circular economies and food security locally. In such an approach, within the example given, a multitude of recovery pathways, as shown here, can achieve circularity.
Acting on such leverage points to curb urban encroachment may simultaneously balance import dependencies and decrease national demands that drive unfair and environmentally destructive teleconnections. It highlights how crucial regional policies are in the aim to achieve an overall sustainable circularity, which can also support liveability and equity, moving beyond just financial values." "Circular Economy - What is Remanufacturing?","https://www.youtube.com/watch?v=kjV9pZt3vh4","We will now discuss what re-manufacturing means. Re-manufacturing means returning a used product, re-making the product like new, or even better than new, and selling it again to a customer. The main idea of re-manufacturing is to keep as much of the value in the product or component as possible. The value is in, for example, the materials, energy, and transportation. Lower material and energy use helps the planet too. Leading thinkers are proposing that re-manufacturing is a good way to reuse products. This can be seen in the waste hierarchy, for example the one used by the European Commission. It is important, though, to check that the re-manufacturing of a product or component really does lead to improved sustainability. Environmental burdens from transportation or processing have to be taken into account when choosing which product, or how, to re-manufacture. Also, there can be a risk that the product or component contains a hazardous substance, which may even be forbidden today. From a company perspective, re-manufacturing is a way to provide products or components with much lower costs in materials, energy use, and time in production. The re-manufacturing approach can provide improved profits. This value saving can be translated into very competitive offerings to customers. I have seen that many companies earn between two and five times as much as they do on similar new products. In some cases, we also see that the main driver for re-manufacturing is that the product or component contains raw materials which are insecure in supply. Companies producing re-manufactured products state that their products are as good as new, and they provide full warranties and guarantees in the same way as for any other product. In some re-manufactured product examples, all parts of the product are used again, but there are also examples where the core product is kept and new, often improved, components are added. Sometimes re-manufacturing also includes the upgrading of software or the design. There are a lot of examples of re-manufactured products and/or components available on the market already today. Perhaps these products or components go through re-manufacturing just because they are already well designed for re-manufacturing, or sometimes the design was not aimed at helping the re-manufacturing activity, but money is made anyway. Now, let's look at some examples for your inspiration. Scania, a truck company based in Sweden, re-manufactures components such as the gearbox. Here we see barbecues and radios at Svillitus, a company buying returned products from other companies, re-manufacturing them and then selling them on the open market. Re-manufacturing of furniture is a growing business. Here is a chair where the cost of a re-manufactured chair compared to a new one is about half, but the quality is still the same. Inrego is a company buying mostly professional computer equipment, re-manufacturing it, often including upgrading, and then selling it on the open market.
Here you see Volvo cars, where used parts are returned and remanufactured; they are sold as spare parts. E-chris is a company gathering car components, remanufacturing them and selling them as spare parts. And there is the local bicycle repair shop, which buys high-quality used bikes, remanufactures them and sells them on the open market again. Remanufacturing can be done by the company providing the original products, such as Scania and Volvo, or by other companies, such as E-chris, working on the open market. For companies acting on a global market, transport of products or components can become an important issue, and that is why some companies have their remanufacturing sites spread globally. Some companies plan their take-back systems in detail in order to minimize the cost and/or environmental impact of transportation. The main idea of this MOOC is to show you how to design products and components in a way that will make them better suited for remanufacturing. That means that your focus should not be on remanufacturing current products; rather, we encourage you to develop new products for a better future." "Recycling of Metal Scrap","https://www.youtube.com/watch?v=1pzrjEqIeWk","Hello, welcome to this lesson on metallurgical processes for the recycling of metals. Generally speaking, there are two major types of secondary resources for metals recycling. The first is metal scrap and the second is metal-containing waste or residues. Metal scrap will be the main topic in this video, because metal-containing waste or residues come from more specific sources and processes. Metal scrap is already in metallic form, and it can take a relatively pure form when it is collected from product manufacturing. In this case, it is called new scrap or production scrap. Metal scrap can also be in a very contaminated and complex form, for example scrap from end-of-life products such as obsolete electronics. In this case, it is called old scrap. New scrap from production and manufacturing is often specifically collected and remelted in the plant, or sold to remelters for production of the same type and quality of metals or alloys. This is a common practice for aluminium and copper. Remelting is a simple physical process of heating, melting and casting. It is important to protect the metals from oxidation loss during remelting. The main operating cost is the energy, which can be electrical or come from fossil fuels such as oil or natural gas. Since no refining is required, the energy consumption and operational costs are relatively low. Old scrap, such as scrap from end-of-life products, is contaminated by different materials. Copper scrap from cables, wires or automotive parts, and electronic waste, are typical examples of end-of-life scrap. They need refining using different technologies, such as pyro-, hydro- and electrometallurgical methods. During pyrometallurgical refining, the impure materials and non-metallic contaminants are removed. Examples of hydro- and pyrometallurgical plants in Europe are Umicore, Boliden and Aurubis. If a metal scrap contains a lot of other metals and is contaminated, hydro- and electrometallurgical recycling are often used. Hydrometallurgical recycling can be flexible and robust; however, from an energy-consumption perspective it is not always favourable. Printed circuit boards from e-waste can be treated hydrometallurgically through acid or alkaline leaching. In this case an oxidizing agent, for example oxygen, may be needed for the acidic dissolution of copper.
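Before finishing the copper example, here is a minimal sketch of the routing logic this lesson describes: which processing route fits which secondary resource. The categories and routes paraphrase the text; real routing decisions are of course plant- and feed-specific.

    # Minimal sketch of the scrap-routing logic described in this lesson.
    def processing_route(feed: str) -> str:
        routes = {
            # clean production scrap: just remelt, guard against oxidation loss
            "new_scrap": "remelt (heat, melt, cast; low energy, no refining)",
            # contaminated end-of-life scrap: refine
            "old_scrap": "refine (pyro-, hydro- and/or electrometallurgy)",
            # oxidized metal-containing waste and residues: co-smelt
            "residues": "smelt with ores/concentrates (pyro/hydro combinations)",
        }
        try:
            return routes[feed]
        except KeyError:
            raise ValueError(f"unknown feed type: {feed}") from None

    print(processing_route("old_scrap"))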
After leaching, the copper is purified through solvent extraction and finally reduced to pure metal through electrolysis. Next to old and new scrap, we have the metal-containing waste or residues. The metals in these materials are usually in an oxidized state or in the form of various compounds. Examples are tailings from mining and mineral-processing operations, slags and residues from foundries, flue dust from metallurgical processes, and waste liquors and other residues from chemical and other process industries. Metal-containing waste and residues can be smelted together with ores and concentrates in smelters. Depending on whether the metals are oxidized or not, and on the type of metals, combinations of pyro- and hydrometallurgy are also used. For each type of metal-containing waste and residue, the pyro- or hydrometallurgical process, or their combination, must be fine-tuned. So, concluding: for metal production from recycled sources, a distinction is made between metal scrap and metal-containing waste or residues. The relatively pure metal scrap from production processes is called new scrap, and is best directly remelted to a new metal, or melted and refined to a pure metal or alloy, without unnecessary oxidation and reduction. Old scrap, such as scrap from end-of-life products, is quite complex. This scrap can be refined via pyro-, hydro- and electrometallurgical methods, as discussed earlier in the video. For metal-containing waste and residues, both pyro- and hydrometallurgical technologies are used, or combinations, depending on the type of feed. Some critical metals are normally produced as byproducts or co-products of the regular metals. In products they are generally used in trace amounts, at low concentrations. Therefore, recycling of these critical metals from end-of-life products is much more challenging than for other metals. For this, more efficient and cost-effective extraction and refining technologies are needed in the future." "Analyses to Approach Inclusion and Segregation on the City Scale","https://www.youtube.com/watch?v=jJDyzQVmTtk","Welcome to this introduction to spatial analyses for evaluating socio-spatial segregation and the accessibility of foundational services on the city scale. The design of the urban plan affords different ways for people and communities to live together. To approach inclusion and segregation on the city scale, we will describe four analyses: the spatial connectivity between districts, the borders that separate neighbourhoods, the presence of local mixed-use centres, and the distinction of neighbourhoods by types of housing. Afterwards, these physical analyses will be combined and related to the map of income distribution, as an indicator for understanding socioeconomic segregation in the city. We now take a closer look at the city of Amsterdam, currently home to about 875,000 inhabitants, more than half of whom have migrated there. Amsterdam is the core of a metropolitan region of about 2.5 million inhabitants, embedded between the North Sea and the IJsselmeer lake in the east. The city grew concentrically, and currently it is divided into eight districts, further subdivided into different neighbourhoods. As you could see in the last slides, Amsterdam grew in patches. One of the largest extensions, the General Extension Plan from 1935, was planned by the urban designers Van Eesteren and Van Lohuizen.
They were the first appointed urban-design professors at TU Delft, and as such are predecessors of our urban design section in the architecture faculty. Their approach emphasised the relevance of urban design as a multi-scalar discipline. For the extension of Amsterdam, they employed a strategy of connecting main streets, which related the old city of Amsterdam to the new extension areas. By continuing the main city streets, the old and the new city districts could be connected well. Let us now have a closer look at what makes a main street. We look for long continuous streets, mostly historically grown, connecting two centres in the urban plan. Such a street does not only act as a connector between these centres, but is also a connector between the areas on both sides of the street. A core quality thereby is that pedestrians can still cross the street. Such main streets are the first spatial component that contributes to inclusive city design. The second spatial component we are looking at is urban borders. Barrier formation can lead to exclusion. It is therefore important to understand which spatial designs or situations can lead to the development of borders. Borders are spatial elements that separate areas. Separation develops when walkability and reachability between areas are interrupted. Let us look again at the city map of Amsterdam. Borders can be formed by architectural elements like walls, but also by larger elements like rivers, or transport infrastructure such as highways or train tracks. Zooming in, we can identify and draw the main elements that form borders between districts: for example, canals, train tracks, highways or large streets that interrupt the natural flow of pedestrian movement. In this way, we identify places that might need a spatial intervention to connect districts better; such interventions reduce the effect of the element as a border. Interventions could be, for example, to downgrade streets, to transform a highway or car-traffic-oriented street into a walkable street, to establish new connecting main roads between districts, or to build bridges across rivers and canals. You see that this type of intervention is usually a major infrastructural change, and thus requires long-term planning and a related budget." "Introduction to Urban Growth","https://www.youtube.com/watch?v=-1FAzCLN1v8","In this video, I will give you an introduction to this week's challenge, urban growth, and shortly address how this challenge is related to the different themes and videos of this week. Urban growth is a challenging topic, as it has negative but also positive aspects, with a challenging and evolving urbanized environment as a result. When we take an astronaut's view of the world by night, like for instance here, looking at the north-western part of Europe, you can clearly see the illuminated, polycentric and interconnected patterns of urbanization in this part of our world. According to the United Nations, the world population is expected to increase from 7 billion to over 9.3 billion by 2050. This is a 40% increase in less than 40 years. At the same time, the number of urban dwellers is expected to rise from 3.6 to 6.3 billion. To look at this trend another way: the global urban population is increasing by 60 to 70 million people, the entire population of a country like France or the United Kingdom, every single year. Or, to put it differently again, every day at least 200,000 people move to cities or are born in them.
That is equivalent to populating a city the size of Amsterdam every 4 days. If you look at the ranking of the largest urban areas in the world, you can see that almost all of the top 20 are situated in developing countries. Actually, 95% of urban growth will be in these developing countries, which also means that an expected 3 billion people will live by 2050 in informal, or so-called non-planned, urban settlements. So it is a misunderstanding that urbanization mainly results in high-rise, dense and new urban environments. In fact, megacities, with over 10 million inhabitants, only account for approximately 8% of urbanization. The largest share still concerns mid-sized to large cities, urban sprawl and rural areas, as you can see in the pie chart shown here to the right. In the pie on the left, you can see that Asia still includes the largest share of urbanized areas of half a million inhabitants or more. It is, however, important to be aware that the continent of Africa is urbanizing fastest at this moment. One of the negative impacts of this worldwide urbanization process is that it leads to increased pressure on natural areas. Cities have a strong dependency on their hinterlands and therefore put significant claims on the land necessary to supply the essential needs of urban dwellers, such as food, water and energy, and the management of their waste. Humans have come to so thoroughly dominate Earth's biological and natural systems that some scientists have gone so far as to claim that, after the Pleistocene and the Holocene, we have now entered an entirely new geological epoch called the Anthropocene, or the Age of Man. In fact, cities of the 21st century wield so much power that it can be concluded that there often is no longer a leading national economy, but rather a network of metropolitan economies where a country's talent, creativity and industry are concentrated. This has led some experts to envisage a future of powerful city states, rather than nation states, driving the world economy. Within this context, this week's challenge of urban growth and transformation will be addressed from the different themes defined. The first theme to be reflected upon is that of the effects of urbanization on the shape and structure of cities. Key concepts like monocentric and polycentric metropolitan regions will be explained, but also the different approaches or concepts to urban shape and structure that exist, like that of the compact city, as shown here, for instance, in the design of the Great City, a new town near Chengdu in China. Regarding such concepts of the compact city, you will learn that urban shape, surfaces and infrastructures are highly integrated and connected, often leading to multi-layered and complex environments. But regarding this theme, you will also learn about other concepts that take a different approach, like the well-known example of Masdar City, a new eco-town in the United Arab Emirates. Similar to the previous example, it can also be considered an approach based on the compact-city concept, with integrated solutions. The difference, however, lies in another urban morphology, or built-up/open-space ratio, often indicated by the so-called FSI and GSI, the floor space index and ground space index. These notions will be explained in detail within the theme of shape and structure this week.
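To make FSI and GSI concrete, here is a small sketch using their common definitions in urban morphology (FSI as gross floor area per plot area, GSI as building footprint per plot area); the example numbers are made up.

    # FSI (floor space index) and GSI (ground space index) as commonly
    # defined in urban morphology; the example plot below is hypothetical.
    def fsi(gross_floor_area_m2: float, plot_area_m2: float) -> float:
        """Built intensity: total floor area of all storeys per m2 of plot."""
        return gross_floor_area_m2 / plot_area_m2

    def gsi(footprint_m2: float, plot_area_m2: float) -> float:
        """Coverage: building footprint per m2 of plot (0..1)."""
        return footprint_m2 / plot_area_m2

    plot, footprint, floors = 10_000.0, 3_000.0, 4
    print(f"FSI = {fsi(footprint * floors, plot):.2f}")  # 1.20
    print(f"GSI = {gsi(footprint, plot):.2f}")           # 0.30

The same footprint stacked higher raises FSI while leaving GSI unchanged, which is exactly the built-up versus open-space trade-off mentioned above.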
The result, in the case of Masdar City, is that its main shape and structure can be considered a compact though low-rise built-up environment, with a maximized roof surface to create room for solar panels and other installations; surfaces necessary to achieve the ambition of becoming the first zero-carbon city in the world. This relates directly to the second theme of focus in relation to this week's challenge: urban surfaces and infrastructures. This involves energy infrastructure, and sewerage and drinking-water infrastructure, together often framed as sanitation infrastructure. But it also involves transportation, to which particular attention will be given, as this is a crucial aspect of urban environments and of their growth or transformation. As you can see in the extreme case of Masdar City, transportation can become highly innovative, with self-driving cars and separate, car-free streets. For the time being, however, the largest part of the challenges will concern existing urban environments, and only to a limited extent new towns like Masdar City. The challenges related to transport in existing urban environments concern space conflicts and are often related to livability. Sensing and real-time feedback, together with strategic planning, are powerful tools. In fact, the pain is in the change, not in the outcome. In the images shown here, we see an overview for the city of Amsterdam, where the effects of measures to improve accessibility and livability, by reducing car use and increasing more sustainable alternatives such as bicycle use, are monitored and shown. Also, the connection of these urban surfaces and infrastructures with new mobility concepts and energy generation provides new opportunities, like the concept shown here of using electric vehicles, so-called EVs, for temporary storage of electricity generated with renewables on top of buildings. One step further, even, is the integration of these services into new concepts of infrastructures and buildings, like the Dutch example shown here of a solar bike road, which generates electricity. When we zoom out, it shows us that, at the scale of the city, such connection of supply and demand provides all kinds of insights and potential solutions to improve city sustainability. By considering and matching essential flows, like energy, water, nutrients and waste, the so-called urban metabolism can be determined and improved. This urban metabolism is an important notion that will also be explained in more detail within this week's theme of urban surfaces and infrastructures. This, in turn, connects directly to the third theme we will address in relation to the challenge of urbanization: natural resources. Due to urban growth, a globally rising middle class and new developments like ICT, the role of cities with respect to natural resources is often challenging, because of their large environmental impact and socio-economic change. But it also includes significant potential for solutions, as cities, for instance, generate the largest share of global GDP while being more effective in energy use. In this week's video on the theme of natural resources, the relation between resource-use efficiency, self-sufficiency and urban density will be explained. The fourth theme addressed concerns livability and urban living, regarding which the social effects of the physical environment will be explained.
Places and communities will also be addressed, to understand that a neighbourhood is not a community per se, while aspects of street-level design and community building are key to livable urban environments, or even conditional to their sustainability. Livability is also strongly related to the notion of equity. As you can see in this world map, the developing urban regions of the world in particular are also the places where the average income is still to be considered low or medium. This combination of urban growth and poverty puts huge pressure on existing urban environments, and could eventually even lead to urban decline or unlivable environments. But even in existing and positively developing urban environments, such urban growth puts serious pressure on built-up areas, as shown here, for instance, for the case of London. There are limits to the available space and also limits to systems of transport, as shown here for the example of Amsterdam, which makes new smart concepts, innovations, or innovative policies and governance necessary to find solutions. And this is the last theme addressed this week in relation to urban growth: the theme of policy and governance. In fact, from a policy or governance perspective, it is better to speak of urban transformation instead of growth, as there will be limits to growth. Regarding this theme and how to adjust policies, strategies based on dynamic control and development, qualitative planning of conditions, transformation and strategic interventions will be key in the transformation towards urban environments and structures that accommodate change and are livable, sustainable, just, and simply exciting environments to live in." "Local Climate and Building Design","https://www.youtube.com/watch?v=X6un6y-BwIk","Good day. Since we now understand the difference between power and energy, and how to determine the energy use of your building, we will start to look at step 0 of the New Stepped Strategy: research. In this lesson I will focus on the climate in which your building is situated and what that means for our building design. Universe, sun, moon, greenhouse effect, oceans, land, ice caps: all factors that have created a planet with different climate zones, and ocean currents that transport heat and cold across the planet. Here you see different climate zones, as classified by Köppen and Geiger. Every colour depicts a different climate. Let's have a look at the best-known climate types. These are the data of a tropical climate, such as in middle Africa. If you look at the graph, you will see that the temperature is always high and that there is a wet and a relatively dry season. What you can also see is the high average humidity. Next are the data for a desert climate, such as in northern Africa. Hardly any rain, a difference between winter and summer, and, as you can see here, large diurnal differences, which in such a hot and arid climate can be extreme: it can be freezing at night and sizzling during the day. Here we see the temperate climate, such as in the Netherlands: relatively cool temperatures, not too much difference between summer and winter, and precipitation all year round, but not as much as in a tropical climate. This is a continental climate, as in Russia. It resembles the temperate climate, but it is more extreme in summer and winter, and there is also more seasonal difference in precipitation.
And finally we see the polar climate, with the lowest temperatures, of course, but with an enormous difference between winter and summer. There is very little precipitation, mostly snow. Now, if we look at all these differences, it is strange that buildings look the same everywhere, especially if you look at offices. Just from these images, could you tell, first, the location of these offices and, second, the orientation of the buildings? All elevations are the same. Designing such buildings without understanding climatic differences, and keeping them comfortable, is only possible thanks to the technical building services, which can correct the mistakes of the architect. In one climate they help to cool, in another to heat, and they can change the humidity. This, however, costs a lot of energy. So if we want to save energy, the first thing we have to do is design for local circumstances. In that sense, locality means that the building fits the characteristics and the climate of the site. Other terms that reflect this principle are genius loci, the spirit of the place; bioclimatic design, buildings that are adapted to the local climate; and vernacular architecture, architecture that has historically evolved due to local availability and lack of resources, when people had to solve things differently. For instance, in the tropics, where temperatures are always high, vernacular architecture makes optimal use of the cooling capacity of wind by creating air-permeable façades and floors. But roofs need to be suited to withstand heavy rainfall, and buildings should be put on poles to keep out animals and also to create maximum air currents. In desert areas, we see narrow streets for shading, houses with a lot of mass to temper diurnal differences, white plastered surfaces to reflect the sun, and flat roofs that can cool down under a clear night sky. In cold climates, buildings need to be well insulated to preserve the heat, and have steep pitched roofs to keep out the rain and carry the burden of snow. So, every building demands its own best local solution, but its design can be approached similarly, which we will try to teach you. Let's see which climate aspects you need to know before you can design a sustainable building. Temperature: it's good to know the mean temperature, but also the diurnal differences. Humidity: both the absolute and the relative humidity are important to understand what you can do with heating and cooling. Sun: you need to understand the course of the sun through the sky, as well as the solar intensity. Wind: a wind rose tells you the predominant winds, and if you understand the topographical situation you can tell whether these winds are mainly wet or dry, warm or cold. Precipitation: annual values and seasonal differences are important to know. Soil: the underground and its geology define the options to use soil energy. And finally, surroundings: mountains, trees and buildings can cast shadows onto your building, and there may be different local energy potentials you can utilize. We could go into all these aspects, but let's have a look at the sun. This is the sun chart of Delft, my city, at 52 degrees northern latitude. Every place on Earth has a different sun chart. I will explain this particular chart step by step. First, this middle line depicts the south. Most people know that the sun's course runs from east to west. This happens exactly around the 21st of March and the 21st of September.
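Before following the rest of the chart, a quick formula helps check the altitude numbers that come next: at true solar noon the sun's altitude is approximately 90 degrees minus the latitude plus the solar declination. A minimal sketch for Delft, assuming the usual 23.44-degree declination at the solstices:

    # Quick check of the sun-chart altitudes: noon altitude is roughly
    # 90 - latitude + declination (degrees). Declination values assumed.
    LAT_DELFT = 52.0  # degrees north

    def noon_altitude(latitude_deg: float, declination_deg: float) -> float:
        return 90.0 - latitude_deg + declination_deg

    for season, decl in [("equinox", 0.0),
                         ("summer solstice", 23.44),
                         ("winter solstice", -23.44)]:
        print(f"{season}: {noon_altitude(LAT_DELFT, decl):.1f} deg")
    # equinox: 38.0, summer: ~61.4, winter: ~14.6, close to the chart values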
On those days the sun travels from an azimuth of 90 degrees, due east, to 270 degrees, due west. The sun then reaches its summit at 38 degrees altitude at midday, and if you look at the little house, you see that the insolation will be at this angle. In summer, on the 21st of June exactly, this is the track of the sun through the sky: from around 50 degrees azimuth, northeast, to around 310 degrees, northwest. At midday, the sun reaches the highest altitude of the year, 63 degrees, and it reaches the house at this angle. In wintertime, on the 21st of December, the sun only peeps above the horizon shortly, from 130 degrees southeast to 230 degrees southwest, and the maximum altitude of the sun on this very day is only 15 degrees above the horizon. Here you see that incidence on the house. Now I want to ask you: where do you think the sun hangs in summer at 12 o'clock on your watch? It's here. Why? It only reaches due south at 20 minutes to 2, because of two phenomena. First, Delft is about 40 minutes behind Berlin, which is the demarcation line for Central European Time, and second, it's summer, so we live one hour later due to the summer clock shift. And where is the sun in winter, at 12 o'clock on your watch? That's here. It reaches due south at 20 minutes to 1, because we don't have the summer clock anymore. Now, why is this important to know? Well, if you know where the sun is at which moment of the day, you will be able to design the right spatial configuration, glazing and shading for specific functions in the building. It starts with the sun, so always put a north arrow on your drawings. You cannot design an energy-efficient building if you don't know how the sun moves around or over it. Okay, that's enough for this lesson. If you want to know more about the climate in your own region, I can recommend these sources: your own national meteorological agency, a free-to-download program called Climate Consultant, and the sun charts provided for free by the University of Oregon. Good luck. I'll see you next time, when we will discuss smart bioclimatic design." "TU Delft ES Microcredentials for professionals pilot","https://www.youtube.com/watch?v=ljZGORCM2mw","Microcredentials are proof of a learning outcome or result after a short learning experience. Think of education that is smaller than a bachelor's or master's degree, for example courses or modules. Microcredentials stand for quality and are supported by quality assurance according to agreed standards. Microcredentials are issued by accredited higher-education institutions, allowing them to provide students with acknowledged digital certificates. In the future, microcredentials will be recognized by all national higher-education institutions, and they will be registered in the national register. In the microcredentials pilot, we're experimenting with higher education for professionals. The educational units are between 3 and 30 EC, or credits. Institutions participating in the pilot have agreed to acknowledge each other's microcredentials. Additionally, within the pilot, we're investigating the value of microcredentials for a broad group of learners, the potential to combine microcredentials into larger qualifications, and the possibilities for legal embedding of microcredentials in Dutch legislation. The national register will record who has obtained which microcredential. This makes the achievement of learning outcomes traceable and verifiable.
This way, microcredentials are of added value for professionals, as they receive recognized certificates and can thus easily specialise, upskill or retrain with the help of higher education. This helps them in the rapidly changing professional context of today and with their own development ambitions." "TU Delft ES National microcredentials pilot","https://www.youtube.com/watch?v=C5zulgfO7rY","Microcredentials are proof of a learning outcome or result after a short learning experience. Think of education that is smaller than a bachelor's or master's degree, for example courses or modules. Microcredentials stand for quality and are supported by quality assurance according to agreed standards. Microcredentials are issued by accredited higher-education institutions, allowing them to provide students with acknowledged digital certificates. In the future, microcredentials will be recognized by all national higher-education institutions, and they will be registered in the national register. In the microcredentials pilot, we're experimenting with higher education for professionals. The educational units are between 3 and 30 EC, or credits. Institutions participating in the pilot have agreed to acknowledge each other's microcredentials. Additionally, within the pilot, we're investigating the value of microcredentials for a broad group of learners, the potential to combine microcredentials into larger qualifications, and the possibilities for legal embedding of microcredentials in Dutch legislation. The national register will record who has obtained which microcredential. This makes the achievement of learning outcomes traceable and verifiable. This way, microcredentials are of added value for professionals, as they receive recognized certificates and can thus easily specialise, upskill or retrain with the help of higher education. This helps them in the rapidly changing professional context of today and with their own development ambitions. Microcredentials offer higher-education institutions the opportunity to broaden their educational offering with mutually recognized education specifically for professionals. By serving professionals even better, institutions together ensure that lifelong learning in the Netherlands is given an impulse. As microcredentials are verifiable, they also offer value to employers. Microcredentials guarantee the quality of shorter educational units. This means employers are assured that courses have been designed in such a way that learning outcomes are actually achieved. The education is of high quality and meets European quality standards. Would you like to know more about microcredentials? Go to versnellingsplan.nl/en/pilot-microcredentials." "Propeller Integration in Propeller Engines","https://www.youtube.com/watch?v=gUUHwZNTTDw","Hello, my name is Tomas Sinnige and I'm an assistant professor in the Flight Performance and Propulsion section at the Faculty of Aerospace Engineering. In this video we will discuss opportunities for improved propulsion integration for more sustainable aircraft. The previous lectures already discussed several novel aircraft concepts with increased energy efficiency compared to conventional aircraft configurations. In order to achieve such an increase in energy efficiency, we need to improve performance on several aspects. In terms of the propulsion system, we require highly efficient propulsors and highly efficient integration of these propulsors with the airframe.
Moreover, we may need to fly lower and slower to minimize the environmental impact of aircraft operations. Finally, of course, we also need low weight and high aerodynamic efficiency. At this point, we might reach a surprising conclusion: propeller propulsion may actually be a great match for these requirements. But wait, aren't propellers old-fashioned? Propellers were used on the first powered flight in 1903, so surely we should have come up with something better by now, right? Well, in fact, we did not, because propellers are the most efficient aircraft propulsion system available, with propulsive efficiencies reaching up to 90%. Another benefit is that propellers combine very well with electric propulsion systems; in this way, local emissions can be eliminated. Also, electric motors can be scaled relatively easily to a smaller size without a significant performance penalty. This enables a more flexible distribution of engines over the airframe than what we're used to with gas turbines, and therefore brings an opportunity for novel aircraft designs with smart propeller-integration solutions. The promise of this smart integration of propellers with the airframe has led to a large number of design concepts with different integration strategies. What these designs have in common is that the propellers are distributed over the airframe and positioned at unconventional locations. Examples include propellers at the wing tips, propellers mounted to the horizontal tailplane, ducted rotors integrated with the empennage, boundary-layer-ingesting propellers at the rear fuselage, and also distributed propellers at the wing leading edge, or further aft along the wing chord. Each of these concepts features enhanced aerodynamic performance due to beneficial aerodynamic interactions between the propellers and the airframe. Now let's consider two examples in more detail. The first is the wingtip-mounted propeller. As you have seen before in the course, the wingtip vortex is responsible for lift-induced drag, which amounts to about 30 to 50% of the total aircraft drag in cruise. By positioning the propeller at the wing tip, we modify the flow field experienced by the wing. Behind the propeller, in the propeller slipstream, there is an increased axial velocity compared to the free stream, as well as a swirling motion. The wing therefore experiences a higher effective velocity, which leads to higher lift. Due to the swirl, the propeller will induce an upwash on the wing, which leads to a forward tilting of the lift vector, and thus a reduction of the lift-induced drag. As a result, drag reductions of up to 15% have been measured compared to a conventional propeller-wing layout. For this to work, the propeller needs to rotate in the direction opposite to the wingtip vortex. Another way to achieve improved wing performance is by distributing a series of propellers along the wing leading edge. This leads to lift augmentation due to the increased axial velocity in the propellers' slipstreams. For a given lift requirement, this can be exploited in two different ways. One option is to reduce the wing size while maintaining the free-stream velocity; this leads to lower friction drag. The other option is to reduce the free-stream velocity while maintaining the wing size; this leads to lower noise emissions and shorter takeoff and landing runs. Because the propellers are now so close together, they will interact with each other.
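To put a rough number on the induced-drag mechanism described above, here is a minimal sketch using the classical estimate CD_i = CL^2 / (pi * AR * e). All the input values below are assumptions for illustration; in particular, modelling the tip-mounted propeller as a 10% gain in span efficiency is not a figure from this lecture.

    # Classical lift-induced drag estimate, CD_i = CL^2 / (pi * AR * e).
    # All numbers are assumed, purely to illustrate the mechanism.
    from math import pi

    def cd_induced(cl: float, aspect_ratio: float, oswald_e: float) -> float:
        return cl**2 / (pi * aspect_ratio * oswald_e)

    cl, ar = 0.5, 9.5
    baseline = cd_induced(cl, ar, oswald_e=0.80)
    # assumption: the tip-mounted propeller effectively improves span
    # efficiency e by 10%; this is illustrative, not a measured value
    improved = cd_induced(cl, ar, oswald_e=0.88)
    print(f"induced-drag reduction: {1 - improved / baseline:.0%}")  # ~9%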
This close spacing leads to a modification of the slipstream characteristics and also of the noise emissions. One technique to control the resulting adverse effects is synchronizing the propellers: with such a technique, the azimuthal angle between the blades of adjacent propellers is optimized to minimize adverse interactions. Of course, the performance benefits that we just discussed do not come without challenges. The propeller slipstream flow field is actually very complex and includes time-dependent features. Therefore, the resulting aerodynamic interactions on the airframe will be unsteady. Moreover, because of the close coupling between the propellers and the airframe, the propeller blades will experience changes in loading during the rotation. This leads to additional noise and vibrations. Besides, there are also challenges at the aircraft-integration level that require attention, such as the complexity of the systems, weight increases, the impact on stability and control, and also certification challenges. In this lecture, we have seen that propeller propulsion is an attractive option to achieve more sustainable aviation. By integrating the propellers in a smart way with the airframe, performance benefits can be obtained that will be key to achieving more efficient aircraft designs. This does not come without challenges, which are the scope of ongoing research. If we manage to tackle these challenges, we can make novel aircraft designs with improved propeller integration a reality, and thereby make aviation more sustainable." "Box Wing Aircraft","https://www.youtube.com/watch?v=bSXG52R7-hQ","Hello and welcome to this video lecture on novel aircraft configurations. My name is Carmine Varriale and I am assistant professor of flight mechanics in the section of Flight Performance and Propulsion, here at the Faculty of Aerospace Engineering of TU Delft. In this video we are going to look at the unique aerodynamic properties of the box wing, and we are going to try to understand why it is interesting to consider a box-wing aircraft as a potential solution towards more sustainable aviation in the future. First of all, what is a box wing, and why is it called that? A box wing is a closed wing system consisting of two main wings connected to each other by side panels, or side wings, at the tips. It is called like this because, if you look at it from the front or from the back, the profile of the wing looks like a box, or more generally like a closed curve or polygon. Depending on the particular application and category of the aircraft, there are many possible designs for the geometry of the box wing, and the same is true for the possibilities to integrate it with the fuselage and vertical tail in order to make a proper aircraft configuration. The examples you see in the pictures have actually already flown in the real world for light and ultralight applications, but there are many more that are currently being studied in research. For flight in transonic conditions, which is the current standard for transport jets, all wings of the box wing have to be given a certain sweep angle and, as a consequence, a significant horizontal distance, which we call stagger. This particular aircraft configuration, with such a staggered and swept box wing, a wide-body fuselage and twin vertical tails, has recently been referred to as the PrandtlPlane.
The name comes from one of the fathers of modern aerodynamics, Ludwig Prandtl, who was the first to discover the important properties of the box wing, and who in 1924 called it the best wing system. Why did he call it that? Because, no matter the particular shape or design, the box wing achieves the minimum induced drag for a given wingspan and weight. Now, you may already know that induced drag is related to the creation of lift, and as such it is associated with the creation of vortices behind the wing. The intensity of these vortices can be reduced either by increasing the wingspan, as in the case of a truss-braced wing, or by introducing winglets, as in the case of many existing aircraft already in the real world. In the case of the PrandtlPlane, instead, you can imagine that the rear wing acts as a huge winglet for the front wing, and also vice versa. In this way, maybe you can convince yourself that the intensity of the vortices behind the wing is minimal, and so is the induced drag. Now, how can we put this property to good use? Consider the two most numerous aircraft configurations currently in service: the Airbus A320 family and the Boeing 737 family. They are both characterized by a wingspan of approximately 36 meters and can carry about 150 passengers in a two-class cabin layout. Now, if you were to design an airplane based on the box-wing concept but with the same wingspan, you would first of all retain the possibility to use the same ground infrastructure already available in all airports of the world, and I'm talking about passenger gates, hangars or maintenance facilities, for example. Note that this would not be true for the truss-braced wing, which needs a larger wingspan to achieve improvements in aerodynamic efficiency. And this is quite a relevant aspect in our modern world, where population and travel demand keep growing while space for new facilities becomes less and less available. Secondly, thanks to the improved aerodynamic efficiency, the box wing would make it possible either to transport the same weight as the A320 or B737 with less drag, hence consuming less fuel, or to carry more weight with the same drag as the A320 and B737, hence being able to transport more passengers per flight and overall needing fewer flights from point to point. In some preliminary studies comparing the flight performance of a reference aircraft resembling the Airbus A320, also equipped with new-generation engines, and a PrandtlPlane with a 36-meter wingspan, it can be seen that these scenarios are actually not so unrealistic. For various short and medium mission ranges, from 2,000 km to about 6,000 km, and for two different flight strategies, it can be seen that the A320 consumes about 17.5 grams of fuel per passenger per kilometer with 150 passengers, while the PrandtlPlane consumes between 14.5 and 16.5 grams of fuel per passenger per kilometer with 308 passengers. So, in other words, the PrandtlPlane uses less fuel per passenger to move more passengers on short and medium-range routes, and hence can be defined as a more efficient means of transportation. Another aspect to be considered stems from the fact that the geometry of the staggered box wing makes it possible to install multiple redundant control surfaces all over the two main wings. By coordinating the deflection of these control surfaces, it is then possible to perform any maneuver in infinitely many different ways.
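A quick worked example of the fuel figures just quoted: per-flight fuel on a hypothetical 2,000 km mission, with the PrandtlPlane value taken as the midpoint of the 14.5-16.5 range given above.

    # Per-flight fuel for an assumed 2,000 km mission, using the per-passenger
    # figures quoted in this lecture (15.5 g is the midpoint of 14.5-16.5).
    RANGE_KM = 2_000

    def trip_fuel_kg(grams_per_pax_km: float, passengers: int) -> float:
        return grams_per_pax_km * passengers * RANGE_KM / 1_000.0

    a320 = trip_fuel_kg(17.5, 150)      # ~5,250 kg for 150 passengers
    prandtl = trip_fuel_kg(15.5, 308)   # ~9,548 kg for 308 passengers
    print(f"A320-like: {a320:,.0f} kg, PrandtlPlane: {prandtl:,.0f} kg")
    print(f"per passenger: {a320 / 150:.0f} kg vs {prandtl / 308:.0f} kg")

The PrandtlPlane burns more fuel per flight in this sketch, but carries roughly twice the passengers, so the fuel per passenger is lower, which is exactly the efficiency claim made above.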
This redundancy also allows choosing an optimal way to achieve some desired performance. For example, control surfaces could be coordinated to achieve direct lift control, which makes it possible to change the aircraft lift without any change in the angle of attack. This could be especially useful to make the aircraft feel more precise and agile to the pilot's commands, and also to make it react faster to external disturbances. Direct lift control reduces the vibrational loads felt by passengers and structures in turbulent air, for example, resulting in more comfort on board and an overall safer flight. It could also allow precise control over the flight path angle during the descent and landing phases, allowing for steeper continuous descents and lowering the noise footprint of landing close to urban areas. In summary, what have we learned in this video? The box wing is a closed wing system which achieves minimum induced drag regardless of its particular shape and design. By integrating the box-wing geometry with a wide-body fuselage, the PrandtlPlane could manage to transport approximately double the number of passengers of the A320 and B737, while consuming less fuel per passenger and using the same ground infrastructure. Finally, redundant control surfaces on both wings of a staggered box wing can be coordinated to achieve direct lift control, which results in more comfort and safety in flight, and in a lower noise footprint during descent and landing. Of course, there are still many challenges to be faced for the box wing and the PrandtlPlane, but research is ongoing to explore this novel configuration and its practical operation as a potential solution towards more sustainable aviation." "Aircraft Energy Carriers - Can Solar Powered Aircraft Fly?","https://www.youtube.com/watch?v=TBOsC1_JWsE","Hello viewers, and welcome back to another lecture in this MOOC on sustainable aviation. Today we are going to talk about something very important, a hot topic nowadays: aircraft energy sources. On this chart you see some of the popular energy carriers or sources, like hydrogen, biofuel, synthetic kerosene, batteries, liquefied natural gas and liquid hydrogen, scored against several criteria, for example energy density, emissions, availability, infrastructure compatibility, and so on. What you see is that none of the energy sources has only advantages; every energy source has its advantages and disadvantages. That's why it is sometimes better to combine two, which brings me to this slide on hybrid architectures. This is a slide from NASA which shows the different architectures possible for hybrid engines, where you combine a liquid energy source, like kerosene, biofuel or synthetic kerosene, with an electric source, such as batteries. And just like in your car, the architecture can be full-electric, turbo-electric, or something in between, like series or parallel hybrids. This is a field of research where many people are working, and you will see more developments here in the coming years. The other way of hybridizing is to combine fuels themselves. For example, on this slide is an engine that we investigated in one of our projects, called AHEAD, a few years back, where we looked at combining hydrogen with biofuel. The first thing you see here is an engine with two combustion chambers: the first combustion chamber uses hydrogen and the second combustion chamber uses biofuel.
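To put some rough numbers on the energy-density criterion mentioned at the start of this lecture, here is a small sketch; the values are approximate textbook figures, not numbers from the chart shown in the video.

    # Approximate gravimetric energy densities (MJ/kg); rough textbook
    # values for illustration, not figures from this lecture's chart.
    ENERGY_DENSITY_MJ_PER_KG = {
        "kerosene (Jet A-1)": 43.0,
        "liquid hydrogen": 120.0,      # but very low volumetric density
        "liquefied natural gas": 50.0,
        "li-ion battery pack": 0.8,    # roughly 220 Wh/kg
    }

    for carrier, mj in sorted(ENERGY_DENSITY_MJ_PER_KG.items(),
                              key=lambda kv: -kv[1]):
        print(f"{carrier:>24}: {mj:6.1f} MJ/kg ({mj / 43.0:.2f}x kerosene)")

The per-kilogram ranking alone already shows why batteries are so hard for large aircraft, and why hydrogen's weight advantage has to be traded against its volume and infrastructure penalties.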
This dual-combustor engine is thus another way of combining different energy sources, and it is also an area where a lot of research is happening. Now that we have talked about energy sources, let's do a couple of myth-busting exercises. The first myth to bust is: why can't we fly our aircraft on solar energy alone? What happens if we cover the aircraft with solar cells; would that generate enough power to fly? Let's take a hypothetical example where we take an aircraft, something like a twin-aisle airliner, cover it with solar cells, and calculate the amount of power that can be generated. What you see on this slide is that, under some optimistic assumptions, it can generate around 50 kilowatts of power or so. That is actually less than the amount of power required for the in-flight entertainment system; even the air-conditioning system takes more power than this. And please remember that the non-propulsive power requirement in an aircraft is less than a percent of the propulsive power, which is in the order of tens of megawatts. So certainly, on planet Earth, it will not be possible to fly a transport aircraft with solar cells. The other myth-busting question is: why can't we fly with nuclear energy? Well, believe it or not, it has been tried, both by the United States and by the USSR, now Russia. The idea was to have nuclear-powered aircraft, pictures of which you see on the slide. There are several problems when you talk about nuclear propulsion. The main thing is that nuclear reactors are not at all light; moreover, they require a huge amount of shielding, because nuclear radiation is very, very dangerous. The other thing is that in the unlikely event of a crash, nuclear radiation can leak out, and we certainly would not like to have a nuclear disaster like the one at Fukushima, for example. What we can do instead is fly an aircraft on nuclear energy indirectly: if we use nuclear energy to create hydrogen or synthetic kerosene, we can use that hydrogen or synthetic kerosene to fly an aircraft. That might be something very interesting for the future: indirectly nuclear-powered aviation. Thanks a lot for watching. See you next time." "Aircraft Lifecycle","https://www.youtube.com/watch?v=HQGXVTYyXAM","Hi, I am Patricia Parlefleet, guest lecturer at the Faculty of Aerospace Engineering at Delft University of Technology, and in this video I will tell you about the environmental impact of each life cycle phase. Today, not much information is available on the environmental impact of each separate life cycle phase. Generating this information is a daunting task, because there are so many different impacts that each phase can have: for example, energy consumption for high-performance computing in digital design and manufacturing, mining of raw materials leading to local pollution and health risks, emissions in the operations phase, and waste production at an aircraft's end of life. These environmental effects can be quantified using a life cycle assessment, a standardized method of calculating the impact of a product on the environment. There are four types of impact categories: environmental impacts, used resources, waste types and output flows. Environmental impacts include, of course, global warming potential, expressed in kilograms of CO2 equivalent, as well as depletion of natural resources, human toxicity and air pollution.
Used resources include energy resources, such as coal or wind energy, and the use of fresh water. Waste types include disposed hazardous, non-hazardous and radioactive waste. Output flows are components for reuse, materials for recycling, materials for energy recovery, and waste for incineration and landfill, the latter two being the least desirable options. If you want to know more about LCA for transport, please see the links under the additional reading tab. So, if you would make a qualitative estimation of the climate impact in CO2 equivalent for each life cycle phase as it is today, the results would look roughly like this. Now what do you see? Estimations of the operations phase's impact on climate change range from 85% to 98% of the total life cycle impact. The rest comes mainly from the manufacturing phase, including the mining of raw materials. So, rightfully, most solutions presented in this MOOC target the operations phase, with new propulsion technologies and other types of fuels to reduce the emissions. When these solutions become successful, the above picture may change to this in the future, for example in the year 2040. This picture shows a relatively larger contribution from the other life cycle phases, especially from manufacturing, if we do nothing there. Previously, I introduced you to the life cycle of an aircraft. It was explained how many decades it can take from idea to an aircraft's end-of-life waste, and now you know that environmental impacts can be quantified using a life cycle assessment. As you have seen today, an aircraft life cycle is mainly linear. Why do you think that is? Please describe your thoughts in the discussion forum below. 3, 2, 1, 0. Okay, right. It was mentioned at the beginning of the sequence, where the design phase was described. In the current linear life cycle, the main drivers of the design process are weight, cost, performance and safety. A linear life cycle is also often described as take, make, use, dispose. The result is that a lot of fossil resources go into an aircraft, and emissions and waste come out. Environmental impact was, and sometimes still is, not part of the equation. So how can we change that? We believe that for sustainable aviation we need to move to a circular approach, which starts at, you guessed it, the design phase: life cycle design. Life cycle design is the environmentally sound design of products based on the whole life cycle, starting from the exploitation and processing of raw materials, via preproduction, production, distribution and use, to returning materials back into the industrial cycles. The target is to reduce the environmental impact over the whole aircraft life cycle. This idea is represented by the following coat of arms, which we will use throughout this module. So let's move on with the first phase: design! Bye!" "TU Delft-ES Microcredentials for professionals pilot","https://www.youtube.com/watch?v=ukfGgS2WSXI","Microcredentials are proof of a learning outcome or result after a short learning experience. Think of education that is smaller than a bachelor's or master's degree, for example courses or modules. Microcredentials stand for quality and are supported by quality assurance according to agreed standards. Microcredentials are issued by accredited higher-education institutions, allowing them to provide students with acknowledged digital certificates. In the future, microcredentials will be recognized by all national higher-education institutions, and they will be registered in the National Register.
In the microcredentials pilot, we're experimenting with higher education for professionals. The educational units are between 3 and 30 EC, or credits. Institutions participating in the pilot have agreed to acknowledge each other's microcredentials. Additionally, within the pilot, we're investigating the value of microcredentials for a broad group of learners, the potential to combine microcredentials into larger qualifications, and the possibilities for legal embedding of microcredentials in Dutch legislation. The National Register will record who has obtained which microcredential. This makes the achievement of learning outcomes traceable and verifiable. This way, microcredentials are of added value for professionals, as they receive recognized certificates and can thus easily specialise, upskill or retrain with the help of higher education. This helps them in the rapidly changing professional context of today and with their own development ambitions." "TU Delft-ES National Pilot Microcredentials","https://www.youtube.com/watch?v=YB3n46D8ScI","Microcredentials are proof of a learning outcome or result after a short learning experience. Think of education that is smaller than a bachelor's or master's degree, for example courses or modules. Microcredentials stand for quality and are supported by quality assurance according to agreed standards. Microcredentials are issued by accredited higher-education institutions, allowing them to provide students with acknowledged digital certificates. In the future, microcredentials will be recognized by all national higher-education institutions, and they will be registered in the national register. In the microcredentials pilot, we're experimenting with higher education for professionals. The educational units are between 3 and 30 EC, or credits. Institutions participating in the pilot have agreed to acknowledge each other's microcredentials. Additionally, within the pilot, we're investigating the value of microcredentials for a broad group of learners, the potential to combine microcredentials into larger qualifications, and the possibilities for legal embedding of microcredentials in Dutch legislation. The national register will record who has obtained which microcredential. This makes the achievement of learning outcomes traceable and verifiable. This way, microcredentials are of added value for professionals, as they receive recognized certificates and can thus easily specialise, upskill or retrain with the help of higher education. This helps them in the rapidly changing professional context of today and with their own development ambitions. Microcredentials offer higher-education institutions the opportunity to broaden their educational offering with mutually recognized education specifically for professionals. By serving professionals even better, institutions together ensure that lifelong learning in the Netherlands is given an impulse. As microcredentials are verifiable, they also offer value to employers. Microcredentials guarantee the quality of shorter educational units. This means employers are assured that courses have been designed in such a way that learning outcomes are actually achieved. The education is of high quality and meets European quality standards." "Electric Cars Battery Production and Recycling","https://www.youtube.com/watch?v=MVX2hB4Cjt8","Welcome to this lecture, in which I give an overview of some of the most important issues regarding the production and recycling of batteries.
My name is Auke Hoekstra and I'm an expert on electric mobility at the Eindhoven University of Technology. People often say that battery production is polluting the environment, and that's true. That is what much of this video is about, but it's good to stress that batteries are less bad than fuel. Let's take a car that drives 300,000 kilometers over its lifetime, using, say, one liter of gasoline every twelve kilometers or so. Over its lifetime it would require over 25,000 kilograms, not liters, kilograms, of oil, and it would emit over 75,000 kilograms of CO2. That's a lot. An electric vehicle with an average battery of, let's say, 60 kilowatt-hours would need 250 kilograms of battery material, which can be recycled. There are no emissions during the use phase, and we can recycle the battery, so there is a very, very big difference there. Here we see the global process of producing batteries; I've taken an illustration from Volkswagen. It starts with mining raw materials for the cathode of a so-called NMC, or nickel-manganese-cobalt, lithium battery (also graphite for the anode, but let's forget that for the moment). These are the materials you need to mine. Then you must refine these materials, produce the cathode, produce the cell, assemble the entire pack, and then integrate it into the vehicle. I've taken this illustration because it clearly shows that getting the raw materials is the focus of many car manufacturers right now. Here you see that Volkswagen basically promises its investors to move its focus to the start of the supply chain, including mining, refining and cathode production. Something else: we know where to find enough material to make billions of large car batteries. The problem is scaling up production, and doing so in the most sustainable way possible. Most of the talk is about cobalt and lithium, and I want to talk briefly about cobalt first. The discussion there is mostly about appalling child labor, and it is good that this gets attention. But that mainly occurs when the desperately poor in the failed state of Congo mine cobalt, often illegally. So of course we have to do better, but it's not accurate to say this is a problem of car batteries specifically: first, because cobalt is also used for many other purposes, such as refining oil, by the way; and second, because this is mostly related to the political situation in Congo. And by the way, the cobalt in many of these batteries is being phased out, and such batteries can even be replaced with the slightly heavier but longer-lasting iron-phosphate batteries. For the energy transition as a whole, the IEA singles out copper, rare earths and lithium. But copper can almost universally be replaced by abundant elements like aluminum. And rare earths are called that, but they're actually not that rare, and they can also be designed out, for example by replacing the motors in cars, or the generators in wind turbines, with motors without permanent magnets. So that leaves lithium as the biggest problem child. Supposedly we have enough lithium to last us a thousand years, and that has been true for over a hundred years. Currently, we have already found enough for about 10 billion electric cars. The United States has enough deposits to supply its own demand for over a century, and in Europe there's a lot of lithium, among others in Ukraine, by the way. But the biggest known deposits in the world are in South America, mainly Bolivia, Argentina and Chile, and in Australia. Recently, Australia really stepped up, because South America had problems scaling up, and Australia now produces over half of worldwide lithium.
But what if production cannot scale up fast enough, or if there are questions about the sustainability of lithium from, for example, South America? The first alternative could be to get lithium from geothermal brine; the first experiments with that are going on and are looking very good. Another possibility is lithium from the ocean. There is 5,000 times more lithium in the ocean than we know how to find on land — enough for about 50 trillion cars. And some researchers think it can become economically viable in the short term. If that fails, we could temporarily switch to, for example, sodium batteries, where there's really no shortage. Mining is our biggest problem in terms of availability and CO2 emissions, so I have spent some time on that in this video. Refining is basically the next — and relatively simple and low-tech — step that can be quickly scaled up. You often hear that we get half of our battery materials from China, but that's mostly because they do all the refining. And that's because they can do it very cheaply and don't care about sustainability that much, but we could easily scale up elsewhere. It used to be that 90% of CO2 emissions from batteries were caused by just the step of cell manufacturing. But with the new so-called gigafactories, that balance has shifted dramatically, because large factories can produce cells much more efficiently. This picture from the 2021 Tesla Impact Report shows that only 23% of CO2 is now caused by the factory. This in turn enables batteries to be produced with less than 75 kilograms of CO2 per kilowatt hour. And we expect cell production to become much more efficient still, so pretty soon running mines and refineries on low-carbon electricity needs to be the priority, because by that time that's where over 90% of the battery emissions are and the factory will have become less important. With that in mind, I, for example, find extraction of lithium from seawater using floating wind farms a very enticing proposition, but I haven't seen any plans for that, so it's just a tip. It might triple the price of lithium, but that would still mean very little for the overall battery price. Before we get to recycling, I would like to point out that the biggest win we could possibly get is to buy fewer and smaller cars. It sounds kind of simple, but it's not really contemplated that much. We recently did three articles based on a literature survey and agent-based simulations, where we modelled shared autonomous electric vehicles. You, in this case, would not own your own vehicle, but you could travel just as quickly and comfortably, at a fraction of the cost, and without having to worry about where to park your car. We found this would probably lead to much more car travel because it was so cheap and easy, but still at much lower cost, and it would require only 6% of the resources. And it would take less road space and a hundred times less parking space, making cities healthier and safer. I think this is the future we should create, not one in which every household owns a Ford F-150 Lightning. And if you think an SUV is such a nice car to drive in, just think for a moment what it means for others in the city. Here a mother demonstrates the number of kids that can hide in its blind spot. Smaller vehicles are definitely safer for pedestrians and cyclists. So, that was about production and how to reduce demand by sharing. Now let's talk about sharing with future generations through recycling.
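To make the lecture's headline numbers concrete, here is a minimal sketch that redoes the arithmetic with the figures quoted above (a 300,000 km lifetime, 75,000 kg of lifetime CO2 for the gasoline car, a 60 kWh pack produced at under 75 kg CO2 per kWh); all inputs come from the transcript, and the rest is just arithmetic:

```python
# Back-of-the-envelope comparison using the figures quoted in the lecture
# (300,000 km lifetime assumed for both cars). Illustrative arithmetic only.

# Gasoline car: over 25,000 kg of oil burned -> over 75,000 kg of CO2 emitted.
gasoline_co2_kg = 75_000

# Electric car: ~60 kWh pack, ~250 kg of battery material, produced at
# less than ~75 kg CO2 per kWh (the 2021 Tesla Impact Report figure above).
pack_kwh = 60
battery_production_co2_kg = pack_kwh * 75  # upper bound: 4,500 kg

print(f"Gasoline car lifetime CO2 : {gasoline_co2_kg:,} kg")
print(f"EV battery production CO2 : {battery_production_co2_kg:,} kg (upper bound)")
print(f"Ratio                     : {gasoline_co2_kg / battery_production_co2_kg:.0f}x")
# -> roughly a factor of 17, before even counting that the battery
#    material can be recycled while the burned oil is gone for good.
```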
What we should understand first is that right now, recycling batteries is relatively unimportant, even though people talk about it a lot. In this picture you see a conservative estimate for the growth of electric vehicles. Now I will add the number of scrapped vehicles. Cars are used 16 years on average, and with electric vehicles it might be even longer because of the lack of maintenance, which I think makes them be used longer. So it will take many years before the number of batteries available for reuse becomes a significant percentage. And we're not even talking about second life yet. That is even better, because it makes the battery last even longer. A good example of that is a project in the Amsterdam Arena I visited recently, where they do peak shaving for the whole stadium with old Nissan Leaf batteries. A perfect example of second use. Regarding recycling, I cannot go into details because there are simply too many chemistries, and it's all in the nascent stages basically, but I'll link to a recent overview in Nature. And I'll show you that there are really many recycling — let's say pilot — projects already. JB Straubel, the former CTO of Tesla, famously went from Tesla to starting his own battery recycling company. So many also see a business case there; it's not pie-in-the-sky thinking. By the way, lead-acid batteries are already recycled for over 98% in Western European countries. The US has a goal of 90% battery recycling, and the EU even has a goal of 95% recycling for most battery materials. So I think that we will certainly see recycling of the vast majority of batteries once they become available in large quantities. So, to wrap up: we've seen that in a low-carbon future, electric vehicles replace 75 tonnes of carbon dioxide with 250 kilograms of batteries. We've seen that mining for resources is the biggest bottleneck for now. There is ample material, but scaling up production is challenging, and mining is also where the most CO2 emissions occur. We saw that there are many plans to build battery manufacturing plants, and these plants are quickly becoming so efficient that greenhouse gas emissions from them are less of a concern. I've cautioned that giving everybody a large car might not be the future to aspire to: sharing could reduce resource use to 6%, and recycling could cut that number even further. That would make it possible to heal the earth. Is the future I envision likely to come true? I don't know. But you know what they say: the best way to predict the future is to invent it. And it is also clear that this future would be comfortable and technologically possible. So I say, let's make it come true. And that's all. Thank you very much." "What's Inside an Electric Car?","https://www.youtube.com/watch?v=A3LMBFL5XO0","In this lecture we will talk about the key parts of an electric vehicle and their function. We will then look into how they all operate together to create one of the most advanced vehicles on the planet. In this figure you can see the key parts of an electric car. First we have a charging port with the connector and cable. We have the high voltage traction battery and the low voltage auxiliary battery. We have an electric motor and transmission system, which are used for propulsion. And finally there are several power electronic converters that are used for battery charging, for driving the motors and for regenerative braking. The schematic shows how the different components are connected to each other in the electric car.
To explain what is inside an electric car, let us follow the power flow direction and take a look at an electric car by identifying the key components. The first part we look at is the charging port with the charging connector and cable. The charging port, together with the connector and cable, allows the electric car to connect to an external power supply in order to charge the traction battery pack. The charging port is often referred to as the vehicle inlet. If the car is charged with power from the conventional electricity grid, it requires an onboard charger, which is a power electronic converter. The power stage of the converter is made of high-power semiconductor devices, which act as high-speed switches. Different switching states alter the input voltage and current through the use of capacitive and inductive elements. The result is an output voltage and current with a different magnitude and waveform compared to the input. The onboard charger converts the incoming alternating current, or AC, power supplied via the charge port to direct current, or DC, power for charging the traction battery. The onboard charger is like a phone charger, but it can handle much higher voltages and powers. Next is the high voltage traction battery, which is the heart of any electric vehicle. Generally the battery is located at the bottom of the car, but this varies depending on the manufacturer. The role of the battery is to store energy for the propulsion of the vehicle. The battery has a battery management system that monitors and regulates battery charging characteristics such as voltage, current, temperature and state of charge. The energy content of a battery is normally expressed in watt-hours. Nowadays electric cars have battery sizes in the range of 10 to 100 kilowatt-hours. Let us now look at the battery technologies that have been applied in electric cars. First is the lead-acid battery. The prospects for the use of lead-acid batteries in electric vehicles are limited due to their low energy density, sensitivity to temperature and limited life cycle. Next are the nickel-metal hydride batteries. They have been extensively used for traction purposes and are optimized for high energy content. Finally, the most popular are lithium-based batteries. Lithium batteries are classified by the type of electrolyte: lithium-ion batteries with liquid electrolytes and lithium-polymer batteries with polymer electrolytes. The lithium-ion battery is generally preferred for electric vehicle applications, mainly driven by its high energy density. The table shows how a 20 kilowatt-hour lithium-ion battery has a much lower weight than its competitors. When we drive the electric car, the power flows from the battery to the motor and to the vehicle accessories like the lights and audio system. To regulate the power between these devices, it is necessary to use a power electronic converter. In an electric car, a DC-to-DC converter steps up the DC voltage of the traction battery pack to the higher DC voltage needed to run the motor. A secondary DC-to-DC converter, not shown in the diagram, is used to step down the voltage of the traction battery pack to charge the lower voltage accessory battery. The next component is the motor drive. The motor drive controls the speed, torque and rotational direction of the motor. Depending on the motor, the motor drive is a DC-to-AC inverter or a DC-to-DC converter that is used to control the power flow between the battery and the motor.
Unlike the power converters we have seen earlier, the motor drive is a bi-directional converter, capable of delivering power to the motor for propulsion but also of drawing power out of the motor for regenerative braking. We will now look at the electric motor, which together with the battery is one of the two vital parts of an electric vehicle. The electric motor is responsible for converting electrical energy to mechanical energy for driving the wheels via the transmission. Normally a single-gear transmission with a differential is used, as opposed to the variable gears found in combustion engine vehicles. This is why electric cars are automatic cars by default, and it is due to the unique ability of the electric motor to deliver close to full torque at all speeds. Further, the same electric machine can be used both as a motor and as a generator, during driving and regenerative braking respectively. Four types of electric machines are used in both plug-in hybrid electric vehicles and battery electric vehicles today, namely the brushed DC motor, the induction motor, the permanent magnet motor and the switched reluctance motor. It can generally be concluded that induction motors and permanent magnet motors are the most popular when considering parameters such as control, efficiency, power density, reliability and cost. Finally, the last two key parts of an electric car are the auxiliary battery and the power electronic controller. The auxiliary battery provides electricity to start the car before the traction battery is engaged, and it also powers the vehicle accessories. The auxiliary battery is usually 12 volts in current vehicles, but this may be increased to 48 volts in future vehicles. The power electronic controller, on the other hand, directly controls the different power converters and hence indirectly the operation of the battery, the motors and the vehicle. It uses the driver's accelerator and brake pedal inputs to control the power flow and to select the operating mode between driving and regenerative braking. It also controls the onboard charger and the battery charging, together with the battery management system. Now that we know the different parts of an electric vehicle, let's have a look at how the electric vehicle works, based on the electrical power flow. This figure shows the typical electrical layout of the components in an electric car, as seen earlier. Let us analyze it step by step. Power is delivered from the AC grid to charge the battery via the onboard AC-to-DC rectifier and the DC-to-DC battery converter. When the car is in driving mode, the power provided by the battery goes through the battery DC-to-DC converter to the high voltage DC bus. Then the DC-to-AC inverter of the motor drive sends the power to the motor. The motor converts the electrical energy to mechanical energy, which is sent to the wheels via the transmission. Further, a unidirectional DC-to-DC converter steps down the voltage from the high voltage DC bus to charge the auxiliary battery, which in turn powers the electric vehicle accessories. To wrap up: the traction battery, the electric motor and the power electronics play a key role in the operation of an electric vehicle. Since power is exchanged between these components electrically, using cables, this provides great flexibility in the design of the car. This flexibility is not possible in cars with a mechanical drive train, due to the large size and weight of the mechanical components."
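The weight comparison behind the 20 kWh table mentioned in this lecture can be reproduced with typical specific-energy figures. A minimal sketch, assuming round-number energy densities of roughly 35 Wh/kg for lead-acid, 70 Wh/kg for NiMH and 160 Wh/kg for lithium-ion — these values are illustrative assumptions, not taken from the lecture's own table:

```python
# Approximate pack mass for a 20 kWh battery under different chemistries.
# The specific-energy values below are typical textbook figures, assumed
# here for illustration; the lecture's table may use different numbers.

PACK_ENERGY_WH = 20_000  # 20 kWh, as in the lecture's example

specific_energy_wh_per_kg = {
    "lead-acid": 35,
    "nickel-metal hydride": 70,
    "lithium-ion": 160,
}

for chemistry, wh_per_kg in specific_energy_wh_per_kg.items():
    mass_kg = PACK_ENERGY_WH / wh_per_kg
    print(f"{chemistry:>22}: ~{mass_kg:,.0f} kg")
# -> lead-acid ~571 kg, NiMH ~286 kg, lithium-ion ~125 kg:
#    the lithium-ion pack is several times lighter for the same energy.
```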
"Non-CO2 Greenhouse Gases: Global Warming Potentials","https://www.youtube.com/watch?v=0KRfXuVQb68","So far, we have just talked about CO2. We just have seen last week that are important other greenhouse gases to consider. How can you compare the impact of those other gases with the impact of CO2? That will be the main topic of this video. The relative impact that is greenhouse gas has on global temperature is different. The difference is partly caused by the fact that it's gas has a different lifetime in the atmosphere. Methane, for example, breaks down much faster than nitrous oxide. Another important factor is that each gas has different radiation properties, which defines how they absorb infrared radiation. To take these differences into account, the concept of the developed as called the global warming potential. The global warming potential or GWP is calculated by determining the impact of the emission of a certain amount of gas, say one tone, over a period of time, compared to the impact of one tone of CO2, usually the time period of 100 years is used. For those who like equations, this is how it works. The integrate the radiated forcing, which is the impact that gas has on the radiation balance of the earth. It depends on the radiation efficiency of the gas and the time dependent decay of the gas. In the nominate of this formula, you see the integration for the gas for which we want to know the GWP, and in the denominator you see the integration for CO2. So, in summary, the global warming potential is relative to the global warming potential of CO2. If a certain gas has a GWP of 5, it means that it has a global warming potential, 5 times greater than CO2. The calculation for the GWP are complex. You will see a summary of the outcome of such calculations here. These are the global warming potentials, as they were determined by the IPCC for the most common greenhouse gases. You see that in all cases, the global warming potentials of non-CO2 greenhouse gases are higher than those of CO2. For example, the global warming potential value of 27 of biogenic methane. This means that the emission of one ton of methane will lead to a 27 times higher impact on temperature rise than the emission of one ton of CO2. We say that the emission of one ton of methane is equivalent to a meeting 27 tons of CO2, at least when we look at the impact over 100 years. For this reason, when we are talking about greenhouse gas emissions, we talk in terms of CO2 equivalent units, often denoted as CO2E or CO2EQ. With this unit, we can compare the environmental impact of multiple gases. Let us go through an example. Assume that we use natural gas, but that during the production and transport processes, 1% of natural gas is leaking. We start with one giga-zool of natural gas, and as we have seen, the combustion of one giga-zool of natural gas will lead to the emissions of 56 kilograms of CO2. Considering that one giga-zool of natural gas weighs about 18 kilograms, the leak is a 1% means that we also emit 0.18 kilograms of uncomvasted methane. Looking at the GWP value for fossil methane, we can see that 1 kilogram of methane emissions has the same effect as 29.8 kilograms of CO2 emissions. So, the point 18 kilograms of methane will then have the same effect as 5.4 kilograms of CO2. We therefore say that the methane leaks leak contributes 5.4 kilograms of CO2 equivalent emissions. The total emissions is the sum, so 56 plus 5.4, which is 61.4 kilograms of CO2 equivalent. 
So, what we find here is that if there is a substantial gas leakage — and 1% is a substantial leakage — then the methane emissions contribute substantially to the total impact of the use of natural gas. It is therefore important to avoid such leakages. In earlier videos we have discussed extensively the emissions of CO2 and how they can be reduced, but emissions of non-CO2 greenhouse gases can also be reduced. We can distinguish three different ways to mitigate non-CO2 greenhouse gas emissions. The first one is to avoid the activities. For example, we could eat less bovine meat to reduce the methane emissions associated with ruminants, or we could stop digging up coal to avoid emissions from coal mining. The second group of strategies would be to avoid the formation of the gas. For instance, instead of landfilling waste, it could be composted, in which case the organic material is broken down without the formation of methane. Finally, if the gas is formed, it could be captured and destroyed. In general, for industrial sources, emissions can be reduced to a large extent. For agricultural sources, reductions are also possible, though often to a smaller degree. Overall, also for non-CO2 greenhouse gases, very substantial reductions are possible, and the reduction of such gases should be an essential part of any climate change mitigation strategy. So, we have now reached the end of the second week. The concepts introduced in these four videos will allow you to better understand the accounting principles used for measuring energy and greenhouse gases. These are foundational concepts necessary for understanding climate change mitigation, and they are important for next week, when we will go into detail on the different aspects of emission reduction. I hope you are enjoying the course thus far, and good luck with the additional material for this week." "Global CO2 Emissions from Transport","https://www.youtube.com/watch?v=zb8wnuhVcuc","Now that we have seen the differences in transportation emissions between countries, we will discuss their sources. There are two types of transport emissions: exhaust and non-exhaust emissions. Greenhouse gas emissions from transportation primarily come from burning fossil fuels for our cars, trucks, ships, trains and planes. The majority of the fuel used for transportation is petroleum based, which primarily includes gasoline and diesel. In addition to the fuel type, there are other sources of pollution in transport. Some researchers report that pollution from tire wear can be as harmful as, if not more harmful than, exhaust emissions. Harmful particulate matter from tires has been going up with the increasing popularity of large, heavy vehicles such as SUVs and the growing demand for electric vehicles, which are much heavier than standard cars because of their batteries. However, non-exhaust emissions are thus far completely unregulated. Non-exhaust emissions are particles released into the air from brake wear, tire wear, road surface wear and the resuspension of road dust during ongoing vehicle usage. As calculated by a 2017 study, tires are responsible for an annual release of 550 tons of airborne particles into the environment, making up an estimated 10% of microplastic waste in the ocean. It is projected that while petrol and diesel emissions will be significantly reduced in the next 10 years, a steady increase in tire and road surface wear is expected.
Further, when tires are not recycled properly and are burnt instead, dangerous levels of zinc and chlorine are released, posing health risks for the population. Currently, there is no clear legislation in place to limit or reduce non-exhaust emissions. According to the California Air Resources Board, the non-exhaust source emission factors include brake wear, tire/road wear and road dust resuspension. Brake wear is impacted by factors like brake materials, driving conditions or vehicle load. Tire/road wear is influenced by the type of tire materials and driving behavior, and lastly, road dust resuspension depends on the driving environment — whether it is an urban or rural setting — and the driving speed. In an effort to cut greenhouse gas emissions and reach climate neutrality by 2050, the European Union has passed a new regulation that requires all tires to be labeled. This way customers will be able to make more educated choices and hopefully contribute to lowering non-exhaust emissions. Additionally, the European Product Registry for Energy Labelling database was designed. In this database, each supplier can register their tires before selling them on the EU market. Suppliers could register their articles starting October 2020; however, consumers were able to access the database only from May 2021. This database is accessible to the public and aims to educate consumers. Each new label includes information on energy efficiency, indicated by letters from A, being the most efficient, to E, having the worst efficiency; braking performance on wet surfaces, again with A meaning the best braking performance and E the worst; followed by a noise pollution indicator and whether the tires are winter or Nordic tires, which are also called snow or friction tires. Fuel-efficient tires have low rolling resistance: they require less energy than standard tires to propel the vehicle in the direction of travel. The easier it is to roll the tire, the less heat is generated and the less fuel is needed to propel the vehicle. When tires heat up, the tread wears down more quickly. Although the exact fuel savings are not clear and will vary with the vehicle type — gas versus EV versus truck — typical low-rolling-resistance tires should save somewhere between 1 and 4% of fuel compared to traditional all-season tires that do not have the low-rolling-resistance features. With regard to policies that could increase or decrease non-exhaust pollution, we should mention one that is relatively well known in EU countries. It states that all drivers are required to have winter tires in the winter season. The same policy, however, does not require non-winter tires in the summer months. For financial reasons, some drivers use their winter tires in the summer months, which leads to increased pollution, as winter tires are not designed for hot temperatures and therefore release more harmful particles. Although, in recent efforts to slow down global warming, many countries have started to address non-exhaust pollution, non-exhaust emissions remain unaddressed and unregulated in many regions of the world." "Power Systems and Cybersecurity","https://www.youtube.com/watch?v=EjEUbpUAAP8","Most of us take electricity for granted and consider it a basic necessity, but did you know that besides the challenge of balancing power generation and consumption in the power grid, cybersecurity is becoming critical as well? In this video we will define cybersecurity.
Furthermore, we will explain why cybersecurity for the power grid is becoming important. So what is cybersecurity? According to Cisco, cybersecurity is the practice of protecting IT systems, networks and software against digital attacks. These attacks aim to access, modify or destroy sensitive information, to blackmail users, or to interrupt the normal operation of the IT system. Cybersecurity aims to preserve mainly three key attributes of data and IT systems. They form the cornerstone of any organization's security infrastructure. In fact, these key attributes function as goals and objectives for every cybersecurity problem. The first attribute is confidentiality. This means that the information on our IT systems and the data communicated through the IT networks are not accessed by third parties. Confidentiality measures are designed to protect sensitive information from unauthorized access attempts. It is common for data to be categorized according to the amount and type of damage that could be done if it fell into the wrong hands. More or less stringent measures can then be implemented according to these categories. The second attribute is integrity. This means that the data is not manipulated or altered by third parties. Integrity involves maintaining the consistency, accuracy and trustworthiness of data over its entire life cycle. Data must not be changed in transit, and steps must be taken to ensure data cannot be altered by unauthorized people. Finally, we have availability. Availability indicates that information is consistently and readily accessible at all times for authorized parties. This involves properly maintaining the hardware and the technical infrastructure and systems that hold and display the information, so that there is no denial of service or other disruption which results in the IT system being shut down. Now that we have seen what cybersecurity and the CIA triad are, let's discuss cybersecurity for the power grid. On this slide we have a typical representation of the power grid. On top, the transmission system interconnects the generation facilities such as wind farms, solar parks, battery storage and conventional power plants. Control centers provide centralized monitoring and control capability to transmission and distribution system operators. On the lower level, there is the distribution system, which powers up our smart cities. Here we deploy EV charging infrastructure, electric transportation, micro-generation, smart homes and smart buildings, heat pumps and smart metering. The internet of things connects all these smart devices using machine-to-machine communications over 5G. Digitalization is one of the key drivers of the energy transition. Layers of information and communication technologies are rapidly deployed at all levels on top of the power infrastructure. They are used to operate the power grid in a flexible, intelligent way. However, the information and communication technologies introduce reliability and cyber threats. Furthermore, with the rapid pace of grid digitalization, it is increasingly difficult to keep the private communication networks of utilities separate from the public communication networks. Because of this, the attack surface of the smart grid is growing significantly. Therefore, the power grid is becoming more and more susceptible to cyber attacks. But does this pose an immediate danger? The answer is yes, and we will show you a couple of examples. One of the earliest cyber attacks on the power grid was the Aurora generator test in 2007.
This was actually an experiment carried out by the Idaho National Laboratory for the US Department of Energy. In this attack, they targeted the control system of a two-megawatt diesel generator. As a result, the generator started shaking and smoking, which caused physical damage. In 2010, it was reported that the Stuxnet malware targeted a nuclear facility in Iran. The aim was to take control of and damage the industrial control system of the facility to impede the nuclear enrichment process. This was the first and most sophisticated malware to date that was used in a cyber attack to target industrial control systems. It is also possible to target the whole grid infrastructure rather than only individual components. If hackers can intrude into the control centers for power grid monitoring and control, they can maliciously disconnect transmission lines, generators, substations and other power system components. This can lead to cascading failures in the grid, resulting in a power system blackout. This is catastrophic, as most of our daily activities are dependent on electricity. We even have proof that such cyber attacks on the power grid are already happening. In 2015 and 2016, two cyber attacks were conducted on the power grid in Ukraine. The hackers disrupted the distribution system by taking unauthorized control of its industrial control systems. These deliberate acts resulted in power outages affecting hundreds of thousands of customers. The question then is: if such small-scale cyber attacks are already successful, how long will it take until the unthinkable happens — a complete blackout in Europe? To recap, in this video we have defined cybersecurity and looked at the three critical security attributes for IT systems and data: confidentiality, integrity and availability. Furthermore, we are rapidly deploying IT and operational technologies for the power grid. They allow for better monitoring and control, but they also make the grid more susceptible to cyber attacks. We already have proof of various cyber attacks on industrial control systems around the world that resulted in equipment damage and power outages. However, we are not sitting still: we are already analyzing these events to prepare for the future. And after following this module, you'll be prepared as well." "Integration of Renewable Energy Sources","https://www.youtube.com/watch?v=2gl2yTz1ZnY","The current energy system has been developed for the accommodation of conventional electricity generation. But now we need to change this system. We need to adapt it and make it suitable for more renewable energy sources in order to facilitate the energy transition. But what impacts will a large amount of renewable energy sources have on the energy system? And how can we adapt it to accommodate the energy transition? Today's objective is to discuss the impact of introducing a large amount of renewable energy sources into the system. In this video, we will focus on the electricity networks. Let's start with wind energy. For wind turbines to operate efficiently, we of course need wind. It makes sense to locate the wind turbines in the areas with the most wind available. For Europe, these are indicated by the dark red areas on this map, plotting wind speeds of up to 10 meters per second. The most suitable areas are in the coastal regions of Europe, especially the North Sea and the ocean in the northwest. The same holds for solar energy. In order to generate electricity efficiently with solar panels, you need to have a high level of irradiation.
And the southern parts of Europe have more irradiation than the northern parts. So it makes sense to generate solar electricity in southern Europe: the cost per kilowatt-hour produced is lower in Spain and Italy than in northern Europe. This is directly related to the fact that the cost of both wind and solar installations is mostly capex, while the operational costs are fairly low. So the number of operating hours immediately impacts the efficiency of the generation and the cost per kilowatt-hour of electricity produced. Electricity consumption, however, happens where people live: in cities, spread all over the continent. As the generation of renewable energy is highly geographically specific — either in the windy northwest or the sunny southern half of Europe — we need to bring electricity from these locations to the consumer. And to do this, we need long-distance transport. Long-distance transport can also help to overcome seasonal fluctuations in renewable energy sources. In the winter, when there is less sun, it may be necessary to transport electricity from locations further away, perhaps as far as Africa, to ensure that electricity demand can be met. So as the amount of renewable energy in the energy mix increases, our energy system will become increasingly reliant on long-distance transport. This increasing reliance on electricity transport will have impacts on the transmission and distribution network. The current electricity network is made up as a kind of chain between generation and consumption. Consumers are connected to the distribution grid, which is fed by the transmission grid. The transmission grid is in turn fed by the generators. Large-scale renewable power generation will be linked to specific geographic regions, for instance northwest Europe for wind and southern Europe for solar power. If this is going to develop further, we will need a huge amount of transport all over Europe, and a huge number of new transmission links to facilitate that development. The large wind farms in northwest Europe will feed electricity into the high voltage grid, so there must be additional transport capacity at the shorelines. In addition, smaller-scale renewable generation, such as solar parks and onshore wind turbines, typically feeds into the distribution grid. This can be a problem in sunny areas with cheap land if many solar parks are built: all solar farms will produce power at the same time and so can cause congestion in the distribution grid in those specific locations. This is a particular problem right now in parts of the Netherlands, so additional distribution capacity must be built to accommodate this. At the same time, consumers are starting to generate electricity themselves, mostly through rooftop solar. So one effect of the energy transition is that there will be more local renewable generation. Consumers that also generate electricity are called prosumers. For prosumer buildings, two-way operation of the electricity network is needed. This will allow power to be injected into the grid during the day, when the sun produces electricity, and to be drawn from the grid at night. As the grid has previously been designed for one-way power flows, this must be considered in the future. These are just a few of the impacts that a large amount of renewable energy sources can have on the electricity system, and some of the solutions that will be needed to solve them.
However, there are many more that have not been touched on here, such as the impact on electricity markets and electricity prices, and on security of supply. To what extent will these impacts be relevant for the energy transition?" "Transport of Heat by Water Systems","https://www.youtube.com/watch?v=Z4MWOrzcjzA","Welcome to this presentation about the transport of heat by water systems. Transporting heat with water instead of air allows the use of much smaller pipe diameters, because of the much higher specific heat and density of water compared to air. In this lecture you are going to learn about this water network and, most importantly, about the surface areas needed. Let's start with the heat conversion system, which may be located differently depending on the type of natural resource used. In general, systems using combustible resources like gas, oil or biogas will be placed on the roof, in such a way that in case of explosion the damage to the building is limited. It is sometimes possible to place them in the basement or somewhere else in the building, but they must then be placed in a special explosion-proof technical room. Electric boilers and heat pumps can be more safely placed in a basement. Especially a ground source heat pump will be placed close to the ground. The same happens when an aquifer thermal energy storage system, ATES, is used, when geothermal district heating is used, or any other kind of district heating: the energy conversion takes place elsewhere in the neighborhood and there is only a heat exchanger, generally placed close to street level, that is plugged into the district heating network. Exactly as with air systems, there is a whole network of pipes bringing the hot water and its energy from the central heat conversion system that we described in the previous slide to each room or space in the building that needs to be heated. As indicated before, the heat conversion system is called the generator, the radiators or floor heating are called the emitter systems, and the water pipes, including all valves and pumps, are called the hydronic system. We don't show the pumps and valves here; that is for another lecture. Water pipes are very small, no more than a few centimeters in diameter, so spatially they are not such a big deal and the architect will not mind too much. The pipes can be visible or worked into the walls and floors or ceilings; in that case flexible pipes may be used. Of course the central pipe is larger than the pipes to the rooms, but the diameters remain very limited. The network in the figure is called a one-pipe system; it is old-fashioned and should be avoided, as it leads to a decrease of temperature at each radiator. Look at the first radiator: the hot water cools down in the radiator and is re-injected into the one pipe, which causes a temperature decrease in the main pipe. The last radiator gets water that is quite cold, by which its heating power will be very limited. This must be compensated by installing larger radiators — the further from the generator, the bigger the radiators. Additionally, such a system is difficult to control, so this configuration should be avoided. So in an energy-efficient heating network, a so-called two-pipe system, the radiators should be placed in parallel, like that. There is a supply pipe in red and a return pipe in purple; in fact they are connected to each other in a closed loop. This way all radiators get the same temperature levels and can deliver the same quantity of heat. Even more branches are possible, like that.
In the case of a building with five floors, we would have five main branches; I show only two of them. This last configuration is very similar and is often used in combination with flexible pipes, like in floor heating. There are then two headers: the red one is a distributor, the purple one is a collector. Finally, just like there is a fan in an air handling unit to push the air through the building and to compensate for the pressure drop in the ducts, we need pumps in the water system to push the water through, overcoming pressure drops and height differences, especially when the generator is placed in the basement. Generally the pump is placed at the return side. You see here one of the radiators in the hydronic network of the building I am working in, with its supply and return pipes and the connection between the pipes and the radiator. On the right you see the header of a floor heating system, with its distributor and collector. In general, the velocity of water in pipes should be limited to 1.2 meters per second to avoid noise and pressure losses. In another course, during week one, you learned how to estimate the heating load of a complete building as the sum of transmission, ventilation and infiltration losses, solar gains and internal heat gains. Let's take again the same building as previously, and imagine you have calculated the needed heating capacity, which is the maximum nominal load during cold weather. Imagine the total heating load is 600 kW. There are 5 floors. Imagine there are 16 rooms per floor, that they all need the same quantity of heat, and that we neglect the corridors. Then for each room we need to bring 600 divided by 5 divided by 16, which is 7.5 kW. How large should the radiators be? Now look at the energy delivered by emitters; we study it on the example of a radiator, but it would be exactly the same for floor heating. Let's call the temperature of the water entering the radiator at the supply side Ts, and the temperature of the water leaving the radiator at the return side Tr. You see them on the picture. We will take as an example Ts equal to 80 degrees Celsius and Tr equal to 60 degrees Celsius. This radiator is nothing else than a heat exchanger, transferring the heat contained in the hot supply water to the room air, while the water is cooled down from Ts to Tr. The water in the radiator loses m times cp times (Ts minus Tr), which is given to the indoor air. So the heat delivered by the water in the radiator is Q equals m times cp times the temperature difference between the water coming into the radiator at the supply side, Ts, and the water leaving the radiator at the return side, Tr. Back to our example: we know that Q is 7,500 W and we know the temperatures, so we can easily estimate the needed mass flow rate m, which is, by the way, controlled by the valve mounted on the radiator at the supply side. In our case it is 0.09 kg per second. We can then size the pipe. The volume flow rate is m divided by rho, the density of water, 1,000 kg per cubic meter. So the volume flow rate is 0.09 divided by 1,000, which is 9 times 10 to the power of minus 5 cubic meters per second. Dividing by the maximum velocity of 1.2 meters per second gives the required cross-sectional area of 7.5 times 10 to the minus 5 square meters, leading to a radius of less than 5 millimeters — a diameter of less than 1 centimeter.
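The sizing arithmetic in this example can be checked in a few lines. A minimal sketch using the lecture's numbers (7.5 kW per room, 80/60 degrees Celsius supply/return, 1.2 m/s maximum velocity); the specific heat of 4,186 J/(kg·K) for water is a standard value, not quoted in the lecture:

```python
import math

# Radiator pipe sizing with the lecture's example numbers.
Q = 7_500           # heating load per room [W]
T_supply = 80.0     # supply temperature [degC]
T_return = 60.0     # return temperature [degC]
cp_water = 4_186    # specific heat of water [J/(kg K)] (standard value)
rho_water = 1_000   # density of water [kg/m^3]
v_max = 1.2         # maximum water velocity to avoid noise [m/s]

# Q = m_dot * cp * (T_supply - T_return)  ->  solve for the mass flow rate.
m_dot = Q / (cp_water * (T_supply - T_return))       # ~0.09 kg/s

# Volume flow rate, then the pipe cross-section needed at v_max.
V_dot = m_dot / rho_water                            # ~9e-5 m^3/s
area = V_dot / v_max                                 # ~7.5e-5 m^2
radius = math.sqrt(area / math.pi)                   # ~4.9 mm

print(f"mass flow    : {m_dot:.3f} kg/s")
print(f"pipe diameter: {2 * radius * 1000:.1f} mm")  # < 1 cm, as in the lecture
```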
More interesting is to remember that this heat is delivered to the room air through the surface area of the radiator, according to this second equation, telling us that the heat exchanged between a surface at a certain temperature and the surrounding air is the temperature difference between the surface and the air, times the surface area, times a heat transfer coefficient alpha representing convective and radiative heat transfer. The surface of the radiator does not have the same temperature everywhere: it is 80 degrees at the start and 60 degrees at the end. It is possible to show that the proper mean is not exactly the arithmetic one, but that it can better be calculated using the logarithmic mean temperature difference, LMTD, as defined below on the slide and explained in the presentation on heat exchangers. Please also note that alpha is the convection and radiation coefficient, which depends on the air velocity and the design of the radiator, and that A is the area of the radiator. If you know the temperature levels, T supply and T return, and the quantity of heat to be exchanged, you can then relatively easily estimate the needed heat exchanger area — the area of the emitter — by using the law of conservation of energy, by which the result of equation one should be equal to the result of equation two, leading to the formula below. The surface area A determines the size of the emitter, the radiator in our case, and therefore its cost. If we take again our example with a heating load of 7,500 watts and supply and return temperatures of 80 and 60 degrees, this leads to an LMTD of 49.3, and to a surface area of 20 square meters. This seems big, but a power of 7.5 kilowatts corresponds to a large room of about 125 square meters, and in such a large room one could place eight radiators of 2.5 meters by one, for example. As we will see on the next slide, there are also lots of ways to increase the effective surface area of the radiator without increasing its size too much. Producers of radiators have generally done all these calculations for you, and you will just find product descriptions describing the size of the radiator and the maximum heating capacity for which it is suitable. I described two slides ago that the heat transfer coefficient alpha consists of a convective and a radiative part. When it comes to products on the market, it is very confusing that producers sometimes categorize their emitters into radiant heating or convective heating. In fact all products always have both components, according to the laws of physics. In some cases the emitter has just one flat plate, like this one. In other cases the plate is corrugated, like on the radiator above and on this picture, and additional plates are added, increasing in this way the effective heat transfer surface for convection. So in general we can rewrite the heat transfer equation like that, taking into account the outer radiative surface Ar and the surface for convection Ac. The radiative share can then be calculated easily. In general, if the radiative share is higher than 50 to 60 percent, we speak about radiant heating. The flat radiator on the picture is such a one. If the radiative share is lower than 50 percent, we call it convective heating. Common radiators have in general a radiative share of 20 to 30 percent, but keep in mind that both radiative and convective heat transfer are always there. I also remind you of what you've learned in the course Health and Comfort in Buildings, namely that the temperature of the surfaces is very important for comfort.
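The LMTD of 49.3 and the roughly 20 square meter area can be reproduced as well. A minimal sketch, assuming a room air temperature of 20 degrees Celsius (implied by the LMTD result, not stated above) and backing out the combined heat transfer coefficient alpha that the lecture's 20 m² answer implies:

```python
import math

# Logarithmic mean temperature difference for the 80/60 degC radiator,
# in a room assumed to be at 20 degC (the value implied by LMTD = 49.3).
T_supply, T_return, T_room = 80.0, 60.0, 20.0

dT_in = T_supply - T_room    # 60 K at the inlet end
dT_out = T_return - T_room   # 40 K at the outlet end
lmtd = (dT_in - dT_out) / math.log(dT_in / dT_out)   # ~49.3 K

# Q = alpha * A * LMTD  ->  A = Q / (alpha * LMTD).
# With the lecture's result of A ~ 20 m^2 for Q = 7.5 kW, the implied
# combined convective+radiative coefficient is alpha ~ 7.6 W/(m^2 K).
Q = 7_500
alpha = 7.6   # assumed: backed out from the lecture's 20 m^2 result

A = Q / (alpha * lmtd)
print(f"LMTD = {lmtd:.1f} K, emitter area = {A:.1f} m^2")
```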
In this lecture we looked at the location of heat generators and then studied hydronic networks, which are the water networks distributing the hot heating water to the local emitters — radiators or floor heating, for instance. We saw that a good hydronic design, with radiators in parallel, is recommended. We also discovered that the diameter of water pipes is really small, and we described the way to calculate the size of the radiators needed to heat rooms. Thank you very much for listening." "Transport of Air, Heat and Cold","https://www.youtube.com/watch?v=UTLQgQErkzo","Hello, welcome to this lecture, in which we are going to make an inventory of the different ways to bring air, heat and cold into the rooms of a building. All three are essential to a healthy and comfortable indoor environment. It is good to realize first that there are different traditions all over the world for how to do this. Some of you may know only air systems, while others are familiar only with radiators, and others only with room air conditioning. So let's bring structure into all these things and make sure you get the full picture. Let's start with the transport of hygienic air. Hygienic air is the quantity of clean outdoor air that is needed to maintain a good indoor air quality. You have learnt in other classes and courses that a minimum of 25 cubic meters per hour per person is needed and that higher values, between 36 and 50 cubic meters per hour per person, are recommended. It is very important that this clean air is brought directly to where people need it, which is in the building's rooms. As for the transport of air, ventilation systems can be divided into four types. First, the completely natural ones, through windows or opening grilles; you see here a section of a building with a corridor and rooms on both sides. Second, the systems with an exhaust ventilator pushing the air out of the building, while the supply air comes in naturally. Third, the ones with mechanical supply, where the air is pushed inside the building and then flows away through window openings and cracks. And fourth, the ones with mechanical supply and exhaust, generally equipped with a heat recovery heat exchanger. Only the two last types, the ones on the right, allow for centralized air handling. So, in three of the systems, ducts are needed to transport air. The more air is needed, the bigger the ducts. For an architect, it is extremely important to know already at the start of the design how big the ducts will be. He has to account for them in his spatial design and reserve enough space for them. So, as an indoor climate designer, one of the first things you have to do is to make a choice between one of these four systems and to estimate the size of the ducts, if any. That was for the hygienic air. Let's look now at the transport of heat and cold. Let's consider a room with its heating or cooling demand, which you have learned to calculate in another course by making an energy balance, accounting for the heat losses and gains by transmission, ventilation and infiltration, solar gains and internal heat gains. If the balance is negative, there is a deficit of heat and the room must be heated; otherwise it would become cold, see on the left. If the balance is positive, there is a surplus of heat and the room must be cooled; otherwise it would become hot, see on the right. Basically, there are three main ways of bringing heat and cold into a room. First, you can generate the heat and the cold in the room itself, so there is no need for transport.
This would be the case if an electric radiator or a stove is used in the room, left, or a room air conditioner, like on the right. Very often, the equipment generating heat and cold is not placed in the room, but somewhere central in the building, like a home boiler placed in the toilet or the attic. In office buildings, you often see big technical rooms on the roof where the boilers and cooling machines are placed, like on the picture, meaning that we need some piping to bring the heat and the cold to the rooms, and we need to choose which fluid will be circulated in these pipes. Basically, we have the choice between only two solutions: air or water, which are both abundant, cheap and harmless in case of leakage. How to choose between the two? To be able to make a choice between water and air as the transport fluid for heat and cold, we need to consider the size of the transport ducts. Let's start with a closed system and consider heating; it works exactly the same for cooling. In a closed system, a fluid is heated by the heat generation system, a boiler for instance, and circulated through a pipe and a heat exchanger to the room. In the heat exchanger, the fluid gives its heat to the room, by which the room is heated and the fluid is cooled. The cold fluid goes back to the heat generation system and is heated again, and the cycle goes on. The quantity of heat needed by the room is Q heating, so that is the quantity of heat that the fluid must gain in the heat generation system. This heat can be expressed very simply by this equation, saying that Q heating is the density times the specific heat of the fluid, times its volume flow rate, times the temperature difference T in minus T out. Let's compare the properties of water and air. The specific heat of water is 4.18 kilojoules per kilogram Kelvin and rho is 1,000, so rho times cp is 4,180 kilojoules per cubic meter Kelvin. As for air, cp is about 1 and rho is 1.2, so rho times cp is 1.2 — quite a difference with water. This difference must be compensated by a much higher flow rate of air compared to water, if we want to bring the same quantity of heat. You see below that the volume flow rate of air must be 3,483 times the one of water. What a difference! What does that mean for the size of the ducts? Well, the volume flow rate of a fluid can always be expressed as the product of the cross-sectional area of the pipe and the velocity of the fluid, so it is pi times r squared times the velocity, where r is the radius of the pipe. Because of noise and pressure losses, it is better to limit the velocity in ducts. In an air duct, the velocity will be around 10 meters per second; in a water pipe, it will be around 1 meter per second. So, by using these values in the equation above, we find that the radius of the air duct should be about 19 times the one of the water pipe. You see here what that looks like. In general, the water pipe leading from the boiler to the radiator has a diameter of about 2.5 centimeters. To transport the same quantity of heat with air, a duct with a diameter of 36 centimeters is needed, which would look like this air duct on the figure. And there is an additional thing to think of, and that is the size of the heat exchanger in the room itself. In the lecture about heat exchangers, you have learnt that the surface area of the heat exchanger can be calculated with this formula, where delta T mean is the logarithmic mean temperature difference between T in and T out, and h is the heat exchange coefficient between the fluid and the room air. For water, h is around 10 watts per square meter Kelvin, while for air it is about half of that.
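Before moving on, the duct-size comparison just made can be verified numerically. A minimal sketch using only the property and velocity values quoted in the lecture:

```python
import math

# Volumetric heat capacity rho*cp of the two candidate transport fluids,
# using the lecture's values.
rho_cp_water = 1_000 * 4.18e3   # J/(m^3 K)  -> 4,180,000
rho_cp_air = 1.2 * 1.0e3        # J/(m^3 K)  -> 1,200

# For the same heat flow Q = rho*cp * V_dot * dT (same dT), the air
# volume flow must be larger by the ratio of volumetric heat capacities.
flow_ratio = rho_cp_water / rho_cp_air                   # ~3,483

# With V_dot = pi * r^2 * v, and velocity limits of ~10 m/s in air ducts
# versus ~1 m/s in water pipes:
v_air, v_water = 10.0, 1.0
radius_ratio = math.sqrt(flow_ratio * v_water / v_air)   # ~19

print(f"air needs {flow_ratio:,.0f}x the volume flow of water")
print(f"so an air duct radius ~{radius_ratio:.0f}x the water pipe radius")
```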
Because h for air is about half of that for water, the surface area needed when air is circulated is twice the one needed when water is circulated. Such a loss of space is not acceptable, and such closed-loop air heat exchangers in the room are never used in practice. Instead, open systems are chosen: the duct is open and ends in the room, delivering the warm air, which mixes with the room air and is exhausted back through a return duct, exactly like with a mechanical supply and exhaust ventilation system, like this one. There is only one problem here, and that is that the volume flow rate of hygienic air is much, much lower than the flow rate needed to bring the heat or cold into the room. So the mechanical ventilation system must be much bigger and the ducts must be much larger, like that. This has important consequences for the architect, who needs to reserve enough space for the ducts. Very often, mixed systems are used: the hygienic air is heated or cooled in an air handling unit, bringing in this way a part of the needed heat and cold, and the remaining part is delivered by a water system. This saves a lot of space in the building. You see on the slide, left, a heating mode and, right, a cooling mode. Summarizing: you must make a distinction between the transport of hygienic air and the transport of heat and cold. For the transport of hygienic air, four systems can be used, and only two of them — mechanical supply, and mechanical supply and exhaust — allow for preheating or pre-cooling of the air. In most cases, the flow rates of hygienic air are not enough to fully heat or cool a building. Heating and cooling can be done by a water system, bringing the needed heat and cold from the central generation system to each of the rooms. It can also be done by an air system, in which large quantities of warm or cold air are brought to each room. The ducts of such an air system are much larger than the pipes used in a water system. That is why we often have mixed systems, taking advantage of both. Thank you for listening, goodbye!" "Webinar Online Learning TU Delft - Your Questions Answered","https://www.youtube.com/watch?v=MUfFyTQcMEg","My name is Tracy Davis. I am an online learning developer. Baya, my colleague, is with us today and she is also an online learning developer. And Joanna is in our marketing team, so she is also with us today. All three of us will be taking you through the presentation, and we'll aim to answer as many questions as possible at the end. So, looking at the agenda: we'll start with a bit about us, then we will look at some frequently asked questions that we get on our support networks and on our courses, whether they come from learners or from people across the board. And then we'll have the live Q&A, like I said. And with that I will hand over to Joanna. Yes, thank you very much. So I'll start off — hello everybody again. This is Joanna from the Office of Communications. You might have heard of me before; you might have received my emails, and that's how you found this webinar as well. I'm going to start with a little introduction of what online learning at TU Delft looks like, and I'm going to walk you through our brief history. So just to let you know, online learning at TU Delft has been around for quite some time. We actually started in 2013, so we've been on the educational market for almost a decade, and we've been a recognized player in this market. We've developed many, many courses.
We now stand at 211 online courses, out of which we have 130 massive open online courses on the edX platform, which you might know. We also have 50 professional education courses, 24 online academic courses, one MicroMasters and 25 different programs, which are combinations of clustered courses in a similar field. To date, we have over 3.3 million registered users on the two platforms where we work — one of them is edX and the other is our own online education platform — and of course this is still counting as we speak: more and more users come to our courses daily. We develop the courses with the mission to educate the world for the better, really. We want to equip people with skills and knowledge that will help solve today's challenges. In that sense, we also want to support them in their career paths and allow them to grow. And we do that together with over 240 academic experts that help us develop those courses. We work with lecturers and professors who do the research in a particular field and then help us put that into the online academic course format for you. We've been recognized in the online education field as well, and the proof of that is probably all the awards that we've got. We have 32 different awards in open and online education, of which we're very proud. So yeah, we hope to continue the journey with you too. Our courses are built around seven main portfolio themes, and these are energy transition, sustainable cities, future transportation, quantum computing, medical technology, skills for engineers, and AI, data and digitalization. So if any of those sectors interests you, check out our website for more courses in one of those. Great, thank you, Joanna. So, we are a multinational team for a multinational audience. I am from South Africa, Baya is from Spain and Joanna is from Poland. So please let us know in the chat where you are from and where you are dialing in from today. I see in the chat: from Colombia and Lebanon — Australia, nice. Hola. Hello. And from the Netherlands, of course, and from India. Fantastic. Hello to the USA — early morning there, thank you for joining us. Fantastic, guys. Great, thank you, everyone. I'm going to hand over to Baya. Thank you. So, okay. The situation is: pretty much on a daily basis, we receive questions from many potential learners around the world. So what we did in order to prepare for this webinar is have a look at what the five most frequently asked questions are. We're going to cover those questions first, get those out of the way, and then we're going to open the floor to see if you need clarification on anything we have mentioned, or if you have any other questions. So, what is all this about — what is online learning about, what is our take on it, how do we talk about online learning at TU Delft? For us, there are three main pillars, which are the three that we have there, and the first one is about flexibility. What this means is that when you register for a course, you will have twenty-four seven access to the course materials. So this means that you will not be tied to a schedule; you will choose when you want to study and where you want to study.
We can also extend this idea of flexibility to the content, so what you want to study: because we have a big portfolio, as Joanna mentioned a little earlier, you also have the freedom to choose the course or the program that actually fits your needs, whatever the goal is that you're looking for. Expert knowledge: Joanna also touched on this briefly. When we create a course, we work together with staff at the university who have the expected pedagogical skills but who are also leaders in research in their own fields. And because TU Delft is a technical university, we have that direct connection with industry, and that is what we bring to our courses as well. We always aim for whatever you learn with us to be something that you can apply directly in your own work, in the workplace, or for your career. And the final pillar, that idea of collaboration, is that you will never be studying in isolation. We have a wide audience, as you can see from the people that are here today, and our course participants come from pretty much the four corners of the world. What we do through our courses is promote that conversation, not necessarily only with the course instructors but with all the course participants. So in a way you get the opportunity to extend your networks, to gain contacts among the people who are taking the course with you, people who have very similar interests. That means that yes, you will be working with them for the duration of the course, but you have the opportunity to continue those conversations and those contacts well after the course has finished as well. The next question we get asked is what types of courses we offer, and we mainly work with two different types of courses. On the one side you've got massive open online courses, which we know as MOOCs, and we have professional education courses, which we call ProfEds. The main difference between the two of them, as you will see if you register for a professional education course, is that it's all about in-depth knowledge of the subject that you have chosen. That doesn't mean that if you take a MOOC it's always going to be at a basic or introductory level; we have some MOOCs that are actually also advanced level. But what we always guarantee is that if you take a professional education course, that course is going to take you much more in depth into the subject than any of the MOOCs. The number of course participants is also very different. MOOCs are massive, even if they're not as massive as they used to be years and years ago, and you will find yourself studying with a cohort of probably hundreds of students; what we've seen is 500-plus participants. While in a professional education course, what you will find is a much, much smaller group; it could be as small as 10 people only studying with you. We collaborate with edX, so we run our MOOCs on the edX platform, while the professional education courses run on our own platform, which is the TU Delft online learning platform. When you finish the course, and if you finish the course successfully, you will receive a certificate, and it is a well-regarded certificate, both if you study a MOOC and if you study a ProfEd. There is a difference between the two certificates, but I will touch upon this in a little while.
And while MOOCs can be either instructor-paced or self-paced, our professional education courses are mostly instructor-paced. I'm giving the floor to Tracy to expand a bit further on that. Thank you. So, the difference between self-paced and instructor-paced courses. First of all, with self-paced courses, your course content is available at the beginning of the course, so everything is there and you are able to view any of the content from the start. With instructor-paced courses, the content is usually released weekly, or according to a release schedule, so you are able to plan the study hours it will take you; but because it is led by the instructor, it is released according to a schedule and not all open from the beginning. With self-paced, you learn in your own time and work at your own pace, so that goes back to one of the pillars that Baya spoke about in terms of flexibility. Whereas with instructor-paced, you are progressing through the course as a group, a group of fellow students supported by your course team. With self-paced, the assignment deadlines are usually right at the end of the course, so you have that flexibility once again: you can start an assignment in week two, if that suits you, and look at the assignment again in week 20, if the course is open that long. Whereas with instructor-paced courses, there are set deadlines for the assignments. So you will get a syllabus, an outline, that will tell you exactly when those deadlines are for your assignments. Self-paced courses are facilitated by course moderators, so there is someone monitoring the course and monitoring the discussion forums. In instructor-paced courses, a lot of the time we have live sessions from the course instructors. These can take very different formats and look different ways; sometimes it's a live feedback session from the course instructors. In self-paced courses, the assignments are usually peer- or self-assessed; in instructor-paced courses, they are staff-assessed assignments. And as I mentioned about the live sessions and recorded feedback, you may have a session with feedback related directly to the assignments. So let's look quickly at one example of a course, one example of a MOOC. In this case, what we have here is Façade Design and Engineering, a MOOC that was created at the Faculty of Architecture, and the course is very much about taking the complex principles that apply to building façades and engineering façades, and trying to explain those in a more accessible way. The course materials comprise readings and mostly videos; for these videos, the course instructors traveled around Europe, interviewing experts in situ about the façades they were responsible for. And after each video you will always find quizzes, exercises, a little test task, that will help you reinforce the concepts explained in the video for this particular course as well. At the end of each module there is a main task, and this task goes back to what I was saying about how important it is to be able to apply what you learn to your own career. So in this case, the participants are asked to choose a project, choose a façade, that they want to work on for the duration of the course, and each week apply what they learn to that façade. For example, if that particular week the module is about detailing, the tolerances or the materials of the façade,
then each student will have to apply that learning to that particular aspect, with reference to their own project. That project gets shared with all the other students, and everybody gets to comment on each other's work, so there is again that idea of feeding back and learning from the other course participants as well. This particular course is now running as a self-paced course; when it ran as an instructor-paced course, there was a live session with the instructors that was not compulsory, so if you wanted to go you could go, and if you didn't want to go you didn't have to go, but it was an opportunity to again have the conversation and ask questions directly to the course instructors. So, yeah. Now we look at a professional education course from our Technology, Policy and Management faculty; this is Multi-Stakeholder Strategies: Analysis for Willing Coalitions. With this professional education course, as you can see, we give you an idea of what to expect every single week; as we mentioned earlier, the content is released on a weekly basis, so you have that idea of what to expect from week one to week six. The method modules poll that is on the left of the image: the idea with this professional education course is that you almost choose your own learning journey. The learners are able to select which assignments and which method modules they would like to do, and we use the poll so that the course instructor is aware of how many people will be doing each of the assignments and what that spread is. There are the live Q&A sessions, so there's usually a page that will ask you when you would like to join the live Q&A sessions and give you some options in terms of times, also looking at different time zones, trying to keep that relevant for the entire audience. There's a snapshot there of one of the live sessions that we had in the previous run of the course. With this course there were feedback sessions as well that were a little bit more one-on-one; that's possible with a professional education course because, as we mentioned, the audiences are usually smaller groups. And to get that real practical application to your day-to-day work and your career, sometimes the one-on-one sessions or the live group sessions are extremely valuable. One of the other questions that we get quite a lot, and maybe some of you have that question too, is: how much time do I actually need to invest in online education, in a particular course? And to be honest, often the objection is: can I actually do it as a working professional? We get that question a lot; the questions are coming from people who work mainly full time, and the answer is: yes, we think you can, if you plan ahead. That's what online education is really for; online education is designed for working professionals, so the study time can vary. It's usually between three to six hours per week, depending on the course. The duration of the course also varies; it can be from four to eight weeks long. We believe these chunks, this pedagogical approach to education, are the best for the learners. For more information about the course duration and the study effort, you can always check our website and the particular course of your interest, because that information is available on the website, in the course details on the right side.
The second thing that I always want to tell the people that ask this question is that, of course, our courses offer a great amount of flexibility, and that's the key to your success. So whether they're instructor-paced or self-paced, you choose when you want to learn throughout that particular week, so you can plan ahead: whether you do your homework on Monday, maybe on Wednesday, or maybe on Friday because that's your free day. You can also do your homework during the weekend; that's also not an issue. All our courses include formal and informal assessment, and there are sometimes deadlines, mainly in the instructor-paced courses, the so-called ProfEds. But even those deadlines can be discussed. Of course we are here for you, the instructors are here for you, and they want to make sure that you get the knowledge that you need, so even those deadlines could be discussed, could be pushed, could be extended a little if you need that extra space. Live sessions, as some of my colleagues here mentioned: these are available sometimes, but if they are there, they are usually announced very early at the beginning; you can probably see them already on the course's website. So you can actually plan for them in advance. Often you also don't have to take part in those: you can watch the recording later if you're not available, so you have that option. Another question which we get quite a lot is: what is the required level of English in order to be successful in our courses? Even though there are no tests that we require for admission, obviously a solid command of English is required to follow the course successfully. All videos are subtitled, and transcripts of the videos are also available for you, so you can watch again, rewatch, reread if you need that extra support, but mainly for accessibility purposes, to be inclusive for those that need it. Right, and I'll give it to Baya; she will cover the certificate part that she spoke about before a little bit. Exactly, thanks very much. So, coming back to the certificates. If you have paid for the course, both for a MOOC and for a professional education course, and if you have completed the course successfully, so if you have attained the grade that the instructor has set as a pass grade, then in that case you will receive a certificate. Now, this certificate is not a paper certificate that you receive in the post; it's a digital certificate, in a way that allows you to share it on your social media. You will see there is a difference between the certificate that you will receive if you take a MOOC on the edX platform and the certificate that you will get after completing a professional education course. The edX certificate is a certificate of participation, so it says you have completed this course; it does not give the grade you achieved, just that you took part in this course. The certificate of professional education includes a number of continuing education units, and this is important because it is the official method to measure the workload, the study time, that you need to invest in a course. This is particularly relevant if, in your country or in your particular context, you need to show evidence of your professional training. Normally the way it's calculated is that one continuing education unit is equivalent to 10 hours of study time. Yeah.
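To make the certificate arithmetic concrete, here is a minimal sketch of the stated conversion, one continuing education unit per 10 hours of study; the function name and the example figures are illustrative assumptions, not an official calculator.

```python
# 1 CEU = 10 hours of study time, as stated in the webinar above.

def ceus(hours_per_week: float, weeks: int) -> float:
    """Continuing education units earned for a course of the given effort and length."""
    return hours_per_week * weeks / 10.0

# Example: a course in the middle of the quoted range (3-6 h/week, 4-8 weeks).
print(ceus(hours_per_week=5, weeks=6))  # 30 study hours -> 3.0 CEUs
```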
Right, so over to you, our participants today. Is there anything else you would like to know? We covered a lot in the little bit of time we have, but this is the time for further questions, so please use the raise-hand setting or leave your questions in the chat box. Questions related to course-specific content we may not all be able to answer, but we will try our best. Bernard also had a question: is the minimum level of education different for different courses? I believe, in a nutshell, it could be different. Most of our programs, most of our courses, are aimed at working professionals, so if you're working in a particular sector you probably have enough knowledge to take a particular course. But there are courses, such as the online academic courses, that do require you to have a specific level of education, such as a bachelor's degree in engineering. So again, if you are unsure, you can always check the admission requirements; there is a little tab on each course website which specifies those. Like I said, usually there are no requirements, but we aim to attract professionals who understand the concepts that we are trying to convey, so it's usually enough if you work in a particular sector, or have experience in it, or have worked with it before; it doesn't need to be your current experience. Another question came in in the meantime: what is the average student strength in a professional course, and can you dwell upon the grading system which is normally followed? That's an interesting one for Baya or you, Tracy. So, it varies from course to course in terms of how the assessment strategy works. It is true that normally you will be required to get 65% in order to pass the course, and you will have different opportunities, different tasks, in order to achieve that 65%. Having said that, there's also, going back to the flexibility: if for some reason you feel that you won't be able to reach that percentage, or you find a particular task too difficult, you will have the support of the instructor to get you over the line. So it's not that if you don't reach 65% you are out; no, we will work together with you in order to help you get over the finish line. I only don't quite understand what is meant by the average student strength, so would you like to elaborate on that? Or maybe, Tracy, do you have an idea what this means? Perhaps they mean numbers, in terms of how many people usually join the professional education courses; those groups are usually quite small. That doesn't mean that they're always very small, but usually around 20 learners join a professional education course, because it is so focused and it is, you know, moderated, with staff-graded assignments, so those numbers are usually kept sort of in the 20s to 30s. Okay, well, then I think we can close the webinar session for today. Thank you to everybody who participated and joined, including those who came in at times that didn't suit them best. Thank you very much for being here, and thank you for giving us some extra questions in the chat to think about and respond to. I hope you all enjoyed the session, that you got something out of it, and that we will see you in our courses very soon." "Delft. Day 0.
Virtual Tour","https://www.youtube.com/watch?v=H7d47wv_RoI","Hello everyone, so we are at the P-U-Del campus and we are standing right in front of the balcony which is the faculty of architecture and we would like to take you inside the campus and if you are wondering some sort of little practice that you are doing in campus and around campus. So let's check. We are at the entrance of the book owner. As you can see we have a container for disposable batteries in old-fifth and shapes. Hi, so we are now here at the B.K. Green Initiative which is a group that introduces sustainable practices and the faculty. So this is a shelf where students can just put their residence materials for anyone to use whenever they like, for the models at the model halls. We are here to show that we have three different dusting, one for plastic, one for regular, one for paper. A special thing is that we also have a separate dusting for paper cups and we only use paper cups for coffee in the faculty team and this is used only for the paper cups which are being said for the new second. Oh, toilet center faculty, I will be the clock tower instead of the paper dorm. So that they can be used both and then be used every day. At the first time here if you bring your own up you can get a discount. Also there is another option to convince and write your environment to impact. While I came the true cost of the coffee that you are applying, also the extra money goes to the charity which helped to have more sustainable coffee. We have a new vending machine which has 100% plant based products which is milky free alternatives for healthier lifestyle. This promotes more sustainable everyday living for coffee as well and encourage students to use this instead of regular vending machine which has normally been in daily tasks. The news regarding consumption and promote healthy habits of our cafeteria. Only provides vegetarian food. All studios here at BK are equipped with the infrared sensors in these rooms so when people are away it's which is off and when people enter this which are not a market. Sorry in one of the studies of the BK. We have automated such a things system and when there is a fund in the fund on the firm side of the facade the blind came down automatically and on the side theaters no sign we can use the maximum thing lights. All the papers on the campus are collected recycle and used here for bringing it like this. This is a solar part we have multiple of these all around the campus. It is used to charge a GB. You also see that the amount of power is given on this small screen. One of the sustainability goals of the youth is to become planet youth by end of 20-30. What you see is there are a lot of developments going around the campus. One such is the use of all energy, heat the campus and this. Up the green village, green village is the lab for sustainable innovations. So any new idea or technology can be tested here in the community to see how it works and it is also efficient or not. Here you can off. To the stream teams are dedicated to the students who strive to become Europe's best at solving pressing issues by competing in worldwide competitions. Each student in a dream team dedicates a year of their academic life, working full-time on projects that propel innovation forward to pave a way for more sustainable future. To give one such example, our Vattenfall Solar Teams, formerly known as New On Solar Teams, has been participating in international solar races since 2001. 
In these two decades, they have won 10 world solar racing titles and set two world records." "CSD01x Teaser PR video","https://www.youtube.com/watch?v=Eh-Q8tg1WZo","What do we actually mean by the term cultural sensitivity? Why is it relevant for designers? Through which types of lenses can we examine culture? And how can we do that in the context of a design project? With this course, you go beyond the obvious towards the unexplored, from the perspective of culture as a source of inspiration for product and service design. You will learn how cultural sensitivity can open up your potential for innovation and design. Cultural sensitivity will help you not only avoid mismatches between your designs and your intended users, but also bring a great source of inspiration. The course has a blended format, and we will learn together as design practitioners and master's design students. As a design practitioner, you will work online, gain theoretical support for your designs and the creation of new designs, and learn from the design students' approaches and ideas. As a design student from TU Delft, you will work both online and in the studio and learn about value theory in design practice. In this way, there is a mutual benefit. My name is Annemiek van Boeijen, and I will be your teacher, moderator and motivator. Thank you." "Road Safety - Course Sample: What is that? A self-driving car?","https://www.youtube.com/watch?v=21SVqbWOnj8","How do we perceive automated vehicles? What do we think of them? And how do, or can, we like them? In this video we will discuss the interaction between automated vehicles and vulnerable road users, such as pedestrians and cyclists, but also the users of automated vehicles and the general public. First of all, let's make a distinction between some types of automated vehicles, and what we expect them to look and drive like. First, the most common one: the car. This one already looks a bit off, don't you think? For one, there is no driver, or it looks to be in the wrong seat. Also, there are lots of cameras, lasers and whatnot attached to that car. It's looking quite strange. With such a strange-looking car without a driver, I don't know how to communicate with it. Will it communicate with me somehow? A second type, automated buses, usually also come without a driver, but then have a steward instead. What differs further from our expectations is the rather low speed these buses have compared to our regular ones. Without a bus driver, how can I tell it to stop for me? Will it halt when I step across the road? And what if I want to leave the bus? There ought to be some sort of interaction with it. Next to these two types, there are several other types of automated, autonomous vehicles, depending on the location where they operate, whether they have a designated track to drive on, how they look, et cetera. We have different expectations of these types of vehicles. So there is no one-size-fits-all when it comes to our expectations of automated vehicles, and we need tailor-made solutions for all of these. To investigate to what extent we accept and are satisfied with this new technology, most commonly we use questionnaires, asking for people's opinions. Evidently, the answers they provide are subjective, and also, they often have opinions about future automated driving systems that they actually haven't experienced yet. There is, however, also the option to gather objective data, for instance by means of video recordings, either inside or outside of such a vehicle.
With this, we can study people's behavior in and around an automated vehicle. And with these two methods, some initial results have been found." "Technology of Intelligent and Integrated Energy Systems Course #energy #multicarriergrids #hydrogen","https://www.youtube.com/watch?v=Mhw4AQYbGq8","Have you ever wondered what our energy system of the future will look like? Which technologies will we use to generate and transport energy? How will we consume energy? And most importantly, which synergies among energy technologies can we exploit to our advantage? How do we technically achieve this? We address these questions in our course and invite you on this journey with us. In this introductory course, you will learn about upcoming energy technologies, focusing on their compatibility and integration. We will cover renewable energy generation, storage and consumption in the thermal, electric and gas sectors. We will look at electric transportation and its interplay with the energy system, considering how an electric vehicle can be an energy storage for the grid. Furthermore, we will consider how to intelligently control and convert energy from one form to another at the system scale. In this course, you will learn how to design the energy system of the future while exploiting the synergies among technologies and energy carriers. This knowledge will put you in the center of the energy transition as a skilled energy system designer. So join us in designing our sustainable future." "AI in Manufacturing - High-level Introduction to Artificial Intelligence","https://www.youtube.com/watch?v=dJ6JCJtZEpc","Hi, my name is Nathan Eski, I'm an associate professor here at TU Delft focusing on AI in manufacturing, part of the Aerospace Engineering faculty. In this video we're going to start off with a high-level introduction to what AI is, just to get our bearings, and then be able to start digging into why we could use it in manufacturing and what the applications are. So we're going to be looking at what AI is; again, a very, very short version. We're going to look at what AI is good at today, because it's a very fast-evolving field of study and industry practice. We're going to look at where AI is not yet up to speed, especially in terms of being used in manufacturing. Then we're going to look at where AI really is unpredictable, and why that's potentially a risk but also an opportunity. Then we're going to finish with: what does this mean for you? What does this mean for manufacturing professionals? So let's get started. The very short version: AI is designed to perform a task without explicit programming instructions. We don't tell it line by line what to do. Instead, we have a framework that allows it to learn on its own. If we do it right, it learns very well and can solve problems for us. So we take input data, and it learns through different types of algorithms, through feedback on what it is doing right and wrong, very similar to how any human would learn a new task. And that becomes a model that we then feed new information into, in order for it to make decisions and solve problems, depending on what we're trying to get done. So in terms of the terminology you might have heard, there are some different phrases out there: AI, artificial intelligence; machine learning; deep learning; neural networks. What do all those mean in context and in relation to each other? We have artificial intelligence, which is the broad definition that we just talked about.
Within that, you have machine learning. This is where there's actual input, feedback, and adjusting of weights and biases for a specific algorithm, where it tunes itself to become better and better at a task. And within that, there's a concept called deep learning. This is what is sometimes referred to as neural networks, and it functions very much like our brains do; it is very complex and has a lot of challenges, but it can do some pretty amazing things. So how do you do AI? Well, you need some different things. You need to acquire data, once you've identified what problem you're actually trying to solve; how much data you need, and how clean it needs to be, really depends on the problem you're trying to solve and the context around it. You do what's called feature engineering, where you look at different parts of the inputs that can be separated, teased out, so that you're looking at the relationships between that data, seeing how we can find those patterns, those relationships, and really be able to get the correct answer, depending on what we're looking for. Then we take our data, and in many cases we have the answers for an initial part of that data, where we know what the answer should be. We use that for our training, so we can see whether it got the answer right. We set a small amount aside for testing, to make sure that the model is trained in a general way and not over-specific to that data. Then we pass it through a particular algorithm; that's what actually makes the decision. So in terms of AI, what problems can it solve? At the broadest level, AI can do two things very well. One, it can provide classification: is this A or B, or describe what this is. It can tell you a lot about the current state of what you're doing. That can be natural language processing, that can be photographs, that can be looking at data and saying what's correlated here, is this doing what I want it to do? Or, two, AI can predict things. In a sense, it can see ahead into the future. It can use past and current data, but it can also look at subtly different behaviors, correlations and relationships within that data, and then it can provide predictions that can, depending on the model, be extremely accurate and helpful. But you have to be able to set this up with the right kind and the right amount of data, depending on what problem you're trying to solve. So what is AI good at today? Well, there's object detection: it's getting better and better at saying, hey, an object's here. That is useful for things like navigation or mapping things out. It's very good at anomaly detection, and this is especially important in manufacturing, where, through different types of sensors, but especially visual data through video and photographs, it's able to see: hey, this doesn't look right, I think there's a problem. And there's also process and safety monitoring and prediction. So it can look and say, hey, this person isn't wearing their safety gear; it can say this situation isn't correct, or I think there's a risk that something can go wrong. So, a quick example of what AI can do, especially when you put together different types of AI to solve a bigger, more complex problem. Let's look at supply chains. Today we have a large organization, in this case in the US, and we have many different suppliers all over the place. And it's very, very complex and difficult. How do we keep track of it? How do we make sure that we are taking the right action in a given moment?
Well, we can take a lot of that data, including past data, and apply supervised regression. So we know: hey, there's a very high probability that something is going to happen; we need to take action if we don't want that to happen. Then you might have what's called an expert system, where we have different types of data put together with decision trees or if-then type rules, where we can see: hey, based off of this information, this is probably what's going on, here's the core data that tells me why that is, and here's the probable action that I need to take. And you might have documents that feed in data that can be scanned in a looser way, to find correlations and pull out different names of people and different locations. That can help track down who I need to talk to. So if I see that there's a problem, I identify the root cause of that problem, and I need to know what action to take; all of these things can help get me to the person I need to talk to very, very quickly. So where does AI fall short? Well, in a lot of cases, AI is statistically impressive but individually unreliable. Sometimes that's fine, because we need a mostly good solution, but sometimes it's not fine unless we get as close as possible to 100%, especially in things like safety and precision manufacturing. We also see that with AI the cost of failure is weighed in such a way that, even if we know statistically the AI is better than a human doing the job, it could have worse implications if the AI fails, even if it fails less than the human does. So where is AI unpredictable? Well, there are different case studies that have been done, but AI has been able to find ways to solve a problem that are not obvious at all to humans. There was an AI hide-and-seek game that was played by different bots, and what they found, and it's a fascinating read, was that the AI bots learned to break the laws of physics in the game. They learned that if they hit a certain corner of the arena just right, they could fly into the air and fly over walls. Another found that it could actually climb on a box and break the physics engine, moving around on the box in order to accomplish the goal. So that's scary if you need it to work the way that you expect, but also very exciting if you want the AI to discover new and innovative techniques. So what does this mean for manufacturing? AI is already being successfully deployed in manufacturing in many different ways. It can do amazing things, but it all comes down to knowing what problem you're solving, making sure you have the right data, feeding it through, training a model, and then deploying it correctly. It's not ready to perform every task, at least not yet; at some point most other problems will be solved with AI, it's just a matter of time and the amount of effort that we are willing to put in. It'll always be somewhat unpredictable, but in a sense, so are we. So we have to keep that in mind and prepare accordingly, to make sure that we really are solving the right problems with the right risk profiles in mind. If you have any questions, please use the course chat discussion, and I'll see you in the next video."
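To tie together the workflow the AI lecture above walks through (acquiring data, engineering features, setting a test portion aside, training a model, and then applying a decision rule in the spirit of the supply-chain expert system), here is a minimal sketch. It assumes scikit-learn and uses synthetic data; the variable names and the 0.8 risk threshold are illustrative assumptions, not the lecturer's code.

```python
# Minimal sketch of the supervised-learning workflow from the lecture:
# data -> train/test split -> training -> evaluation -> threshold decision.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# "Acquire data": synthetic stand-in for 1,000 past supply-chain snapshots,
# 10 engineered features each, labeled 1 if a disruption followed.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Set a small amount aside for testing, so the model is judged on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Expert-system-style rule: flag cases whose predicted risk is high enough
# that someone should be contacted (threshold is an illustrative assumption).
RISK_THRESHOLD = 0.8
risk = model.predict_proba(X_test)[:, 1]
print(f"{(risk > RISK_THRESHOLD).sum()} of {len(risk)} cases flagged for action")
```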
"Intelligent and Integrated Energy Systems- Courses for Professionals.","https://www.youtube.com/watch?v=rk9DgPItLgw","Our daily lives revolve around energy; the security of supply, sustainability and affordability of our energy systems are essential. To achieve these goals, we have to integrate renewable energy, heat, gas, hydrogen and electric vehicles into one unified system in a smart way. Simulations, machine learning and blockchain can make the planning and operation of energy systems more intelligent and more efficient. The right decisions and policies will support this transition and will motivate innovations and investments by businesses. The Intelligent and Integrated Energy Systems program provides a unique window into the energy transition by tackling it from four different perspectives: technology, digitalization, policy and governance, and sustainable business innovations. This series of massive open online courses is created by leading researchers from Delft University of Technology and Rotterdam School of Management. You will learn about the latest developments in the energy sector. Distinguished speakers from established companies like IBM, DNV, Centrica, TenneT and many more will share their insights on how these innovations are applied in the real world. You will discover what is needed to design and operate an intelligent, integrated energy system. We will equip you with the right skill set and tools to advance your career and your company, and to broaden your mindset in one of the fastest-growing job sectors out there." "Façade Design and Engineering: Complexity Made Simple. MOOC Trailer","https://www.youtube.com/watch?v=d6Bsc_IvVSg","Let's go. An incredible sight and an even more beautiful building. This is what you can see; I mean, you see the arches up there. And here you see the Venetian blind. So this is the metal cladding. Let's see: the sheet, the glass, the mullion is there, the frame. What is the main complexity of this brick façade? When I see it like this, I'm okay with it. We don't need to do the fifteenth take. We're going to have to run. Hi. Look." "Rotor Aerodynamics – Urban Air Mobility Online Course (Sample Video Lecture)","https://www.youtube.com/watch?v=ZCoDdea9YIg","Welcome to this video on rotor-rotor interaction. In the previous videos, we have considered the aerodynamics and performance of single rotors. However, UAM vehicles typically rely on multiple rotors to provide the required thrust. These rotors will interact with each other. In this video, we will consider these interactions. First, we will briefly discuss the characteristics of the rotor stream tube. Then, we will treat the aerodynamics of rotor-rotor interactions and their effect on rotor performance. The interactions between different rotors are the consequence of the rotor-induced changes to the flow field. The rotor thrust can be considered as a pressure jump across the rotor disc, which increases the total pressure downstream in the slipstream of the rotor. The pressure jump causes an increase in axial velocity in the stream tube. The static pressure upstream of the rotor is decreased, while downstream it is higher than that of the free stream. The increase in velocity induced by the rotor causes a contraction of the stream tube. Because of the rotor torque, there is also a non-zero tangential velocity component in the slipstream. Finally, the slipstream contains the rotor blade wakes and tip vortices. The contours of vorticity shown in the top right of the slide highlight the time dependency of the flow field in the rotor slipstream. For UAM vehicles with multiple rotors, the stream tubes of the different rotors will interact with each other.
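The slipstream behavior just described (a pressure jump over the disc, increased axial velocity, and a contracting stream tube) is captured by classical actuator-disk momentum theory. The relations below are a standard textbook summary added for reference, not equations taken from the course slides.

```latex
% Actuator disk of area A in air of density \rho: thrust as a pressure jump
T = A\,\Delta p .
% The flow accelerates from V_\infty to V_\infty + v_i at the disc and to
% V_\infty + 2 v_i in the far wake, so momentum conservation gives
T = 2 \rho A\, v_i \,(V_\infty + v_i) .
% In hover (V_\infty = 0) the induced velocity and ideal induced power are
v_h = \sqrt{\frac{T}{2 \rho A}}, \qquad
P_{\mathrm{ideal}} = T\, v_h = \sqrt{\frac{T^{3}}{2 \rho A}} .
% Mass conservation through the faster-moving wake forces the stream tube
% to contract; in hover the far-wake area is A_w = A/2.
```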
The resulting rotor-rotor interaction phenomena can be grouped into two categories. For the lateral interaction case, the rotors are adjacent to each other. For the axial interaction case, the rotors are separated in the streamwise direction. The interaction mechanisms for the two cases are different, so let's discuss them separately. For the lateral interaction case, the stream tubes of adjacent rotors affect each other. This has a minor impact on the time-averaged rotor performance at zero degrees angle of attack. A significant reduction in performance only occurs when the tip spacing between the adjacent rotors is very small. In axial inflow, the rotor efficiency decreases by at most about 1%. In the hover condition, the maximum loss in efficiency is somewhat larger, at up to 3%. The efficiency loss rapidly decreases upon increasing the spacing between the rotors. At higher angles of attack, for example for vertical rotors in cruise, the interaction becomes much more relevant. In that case, the second rotor is in the downwash from the first rotor, leading to an increased power consumption of about 5% to 15% for a given thrust. Besides the time-averaged effect, there is also an unsteady effect. The perturbation of the inflow near the blade tips causes unsteady blade loading. This causes vibrations and modifies the amplitude and directivity of the rotor noise emissions. The interaction effects are a function of the difference in phase angle between the passing blades of the adjacent rotors. By controlling this phase difference, the interaction can therefore be modified. This technique is called synchrophasing. In this video, we see three adjacent rotors at constant rotational speed, illuminated with a stroboscope. Without synchrophasing, the relative phase of the blades is random. With synchrophasing enabled, the relative phase of the blades is controlled. In the first example, the blades always meet in the horizontal plane. In the second example, the relative phase offset is modified to adjust the aerodynamic and aeroacoustic performance of the rotors. The synchrophasing technique can be an effective control strategy to manage adverse unsteady effects of lateral rotor-rotor interactions. For the axial interaction case, the dominant interaction is caused by the operation of a downstream rotor in the slipstream of an upstream rotor. The downstream rotor experiences a perturbed inflow, characterized by increased axial velocity, non-zero tangential velocity, and vortical flow structures. Contrary to the lateral interaction case, for the axial interaction case the inflow perturbation has a significant impact on the time-averaged rotor performance. The downstream rotor can suffer an increase in power consumption of up to 30% for a given thrust, depending on its location with respect to the upstream rotor. The interaction penalty increases with increasing overlap between the rotors. Besides the modification of the time-averaged loading, the interaction will also lead to unsteady blade loads. The downstream rotor will experience a periodic excitation due to the blade wakes and tip vortices from the upstream rotor. This is especially relevant for noise and vibrations. In this video, we have discussed the aerodynamics of interacting rotors. You have learned about rotor stream-tube characteristics and rotor-rotor interactions. In the next video, we will discuss the impact of the rotors on the airframe."
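As a short note on the synchrophasing technique described in the rotor lecture: for two rotors with the same number of blades B, locked to the same rotational speed Ω, the controlled quantity is the relative blade phase. The relations below are a standard way to write this, assumed here for illustration rather than quoted from the course.

```latex
% Blade azimuths of two B-bladed rotors locked to the same speed \Omega:
\psi_1(t) = \Omega t, \qquad \psi_2(t) = \Omega t + \Delta\psi .
% The rotor repeats under rotation by one blade pitch, so only the offset
% modulo the blade spacing matters:
\Delta\psi \in \left[0,\ \tfrac{2\pi}{B}\right) .
% Synchrophasing holds \Delta\psi at a chosen value, fixing where adjacent
% blades pass one another and thereby shaping the unsteady blade loading and
% the amplitude and directivity of the interaction noise.
```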
"Introduction to UAM – Urban Air Mobility Online Course (Sample Video Lecture)","https://www.youtube.com/watch?v=_J6FwRLnJYo","Welcome to this new course on Urban Air Mobility. My name is Fulvios Carano, I'm a professor of aerodynamics and faculty of aerospace engineering and the Health University of Technology. Let me guide you through a journey to the future of air mobility. This prophet course will advance your general and technical knowledge about personal air mobility. We hope you are enthusiastic about this topic as we are and we learn a lot over the next few weeks. The first question you might be asking yourself is, when we talk about urban air mobility, what do we actually mean? concretely. If you search on Wikipedia about what UAM or urban air mobility means, you will read that it concerns urban transportation systems that move people by a year. And that such systems are developed as a response to the increasing traffic congestion. If you consult the website of the European Union project on smart cities, you will read that the main goal of urban air mobility is to make use of the air space as the third dimension by flying vehicles. This is supposed to surface and underground transportation systems currently taking charge of most of the urban mobility. Similar terms are used for personal air mobility and personal air vehicles with meanings similar to urban air mobility. Recently, the term advanced air mobility has also been coming in use. Today's cities and especially metropolis develop along a complex urban pattern. They are multi-layered from underground to the road level and further with bridges and biotops. Imaging not having the valuable assistance of Google Maps, you would easily lose orientation like you would inside the laboring. As keeping it, would require a global view of the city which is possible only from above it. This approach has been imagined more than 2,000 years ago with the meat of Eichers and Dettelus, escaping from the minotao laboring, flying away from it with walks made wings. So the concept of freedom through a third dimension was already envisioned at that time. Why is the topic of urban air mobility growing so rapidly? Let us look together at some demography statistics. The percentage of word population that lives in the city keeps increasing. Take the more developed countries. Cities in 2007 already accounted for 3-4 of their total population. In 2030, the percentage will increase to 80%. In less developed countries, like for instance on the African continent, less people live in cities but urbanization is growing even more steeply there. With such a rate of urbanization, the mobility in large metropolitan regions faces important challenges. Traffic capacity is among the most recognized problem in metropolis. The need for mobility exceeds the road and underground capacity on a daily basis as a result transport saturates during peak hours with the drop in mobility. Maintaining a good traveling speed for cars makes the urban environment particularly dangerous, especially when road traffic interacts with cyclists and pedestrians. Safety remains a challenging urban city mobility. In turn, combustion engines are responsible for aggravating air pollution in large cities. The limits suggested by the public health organization are frequently exceeded. This is a clear signal that a sustainable development of urbanization cannot be based on the massive use of cars, certainly not with internal combustion. 
Last, but not least, the large amount of people in need of transport also involves a large amount of labor. Autonomous vehicles are nowadays appearing that replace the pilot with an intelligent system. The very diverse events occurring during surface mobility pose incredible challenges to the safe deployment of autopilots. The use of the airspace above densely populated areas is already routinely taken up by helicopters. Emergency services like ambulance and police interventions make use of helicopters in big metropolises like New York and São Paulo. In addition, the deployment of vehicles that fly electrically, yet unmanned, has begun with drone services for reconnaissance and package delivery; recently, Delft University of Technology demonstrated the use of a drone ambulance to transport first-aid equipment to where an accident has occurred. Urban air mobility can be split into three domains. First, the vehicle that performs the flight transport. For this domain, you need to know about flight control, aerodynamics, propulsion, aeroacoustics, structures and materials, et cetera. Second, the specific mission accomplished by the flight. For this domain, you can imagine a variety of missions. The most obvious ones are air taxi flights or air ambulance flights, but the services that can be deployed will probably expand as soon as UAM becomes operational. Third, different from conventional aviation, which connects one airport to another, UAM vehicles connect vertiports that need to be integrated in the urban or suburban environment. Air traffic management is a crucial aspect for UAM. The topics that you see typed in black are actually dealt with in the course. Those in blue are relevant too, but they are not expanded on in the course; nevertheless, they are partly covered with additional reading material. Let's look at UAM vehicles in more detail now. I will give you a first classification of UAM vehicles by their propulsion configuration. This classification follows the definitions proposed recently by Rosin and Busing from NLR. A first class of vehicles is the multi-rotor class. This configuration sees a large number of small rotors with fixed vertical axes, which provide the lift force and can be controlled for vehicle stability. In essence, these vehicles can be regarded as an upscaled version of the drones that can nowadays be purchased in a shop. Given their simple architecture, it is no surprise that these systems are at the most developed stage among all categories. For instance, flights with people onboard are now performed routinely with these vehicles. The Volocopter is a multi-rotor with propellers above the fuselage. The rotors are connected through a network structure. It uses electric motors powered by batteries, and the vehicle is piloted. The EHang 184 comprises four sets of counter-rotating propellers installed at a level below the fuselage. The rotors are also driven by electric motors and powered with lithium batteries. The vehicle has no pilot, and the flight is controlled by a ground station. A second class of vehicles is the dual-phase class. In this case, the vertical flight phase is covered by vertical-axis rotors. These vehicles feature a fixed wing that is able to produce lift when in forward flight. The thrust for forward flight is produced by an additional rotor with a horizontal axis. The Kitty Hawk Cora is similar to a small aircraft, with a limited wing span and a high boom tail. Twelve small rotors provide the lift needed for vertical takeoff and landing.
One single large propeller is mounted behind the fuselage, providing thrust. The vehicle is intended to fly autonomously. The Jaunt ROSA makes use of a single large rotor, similar to a helicopter's, above the fuselage. This rotor is powered during vertical flight. Four large electric props provide the thrust for horizontal flight, where the lift is then given by the top rotor in autorotation. The PAL-V is a vehicle at an advanced stage of development. It relies on a propeller for thrust, and the autorotation becomes active in forward flight. The vehicle requires a short track for takeoff. It is powered by an IC engine and is piloted. Thanks to more efficient lift generation by wings or autorotation, dual-phase vehicles provide a longer range than multi-rotor vehicles. The third and most sophisticated category of vehicles is that of the tilt rotors. These vehicles make use of the same propulsion systems for vertical as well as forward flight. This is achieved by placing the rotors with a vertical axis at takeoff and landing; then the vehicle gently transitions towards forward flight by rotating, or tilting, the rotors to produce the necessary thrust. During forward flight, a large portion of the lift is given by multiple wings. The Vahana concept from Airbus features eight electric props mounted ahead of two staggered wings. The vehicle is powered by batteries and is self-piloted. The Lilium Jet makes use of 36 compact ducted fans arranged on a larger rear wing and a smaller front wing. The Lilium Jet is powered by batteries and is intended to be piloted. Tilt rotors potentially offer a longer range than dual-phase vehicles, as they do not feature separate propulsion units for vertical and forward flight. Moreover, the shape of tilt rotors can be optimized for better aerodynamic performance. Tilt rotors, however, are more complex to control, and their development is not yet as advanced as that of multi-rotors. We can now introduce a vehicle classification based on the level of satisfaction of requirements. Such requirements are mostly related to performance. Some of them are typical requirements used in aeronautics, like range, speed and payload. Others, like versatility and size, are specifically relevant for use in the urban environment. Finally, the power requirement is important in relation to the footprint of the vehicle in terms of gas and noise emissions. I will end this lecture with an outline of the course. The first module will continue with a more detailed approach to vehicle classification and elements of eco-friendly vehicle design. The second module goes more into depth. It is split into two parts: part A covers the aerodynamics and acoustics of propellers, as well as power systems; part B covers control and stability, air traffic management and elements of urban integration. You'll be able to pick the part you want to focus on in this course. This concludes this lecture. Thank you for watching, and enjoy the course." "Digitalization of Intelligent and Integrated Energy Systems IIES02x - Promotion Video","https://www.youtube.com/watch?v=S-asFpvKVcU","Making the energy system sustainable, affordable, available and secure is the prime focus of the energy transition. The transition is not easy. We need to incorporate renewable energy, electric transport, heat and gas into our energy system. Meanwhile, the world is continuously changing. Many people have solar panels installed on their roofs, and we are switching to electric vehicles and heat pumps, but did you know that both can function as storage,
allowing us to store energy cheaply for later use? Can't we let everything cooperate somehow? Why do we hear more and more about cyber attacks and blackouts, energy shortages and capacity problems in the energy system nowadays? And how do we leverage digital technology to tackle such challenges? If you are eager to discover the answers to these questions, then you're in the right place. This course, on the digital transformation of intelligent and integrated energy systems, will provide you with insights to overcome these challenges. In this course, you will learn about the opportunities of a digital grid and the various digital technologies to achieve this. You will see how artificial intelligence and machine learning can make grid operations more efficient and more autonomous, transforming human operators into supervisors. We discuss numerical simulators, virtual system models and digital twins, all helping in testing the effects of these digital technologies on the real world, which eventually leads to better design choices. However, when these elements come together, they create one integrated and intelligent system that might be susceptible to cyber attacks. You will learn where the grid might be vulnerable and how to protect it from cyber attacks. We are very honored to have distinguished guest speakers share their industry perspective, so that you can see how all of this comes to life in the real world. They are from DNV, Spectral, TenneT, Centrica and RT. With this course, you will be able to advance your career or your company if you, for instance, work in grid operations, power system design and planning, or power system control. Also, cyber security consultants, software developers, artificial intelligence scientists, project managers, policymakers and strategic planners will benefit significantly from this course. In fact, anyone interested in the digital transformation of the energy system should follow this course and gain insight from it. If you want to discover more, sign up now and be part of this exciting future." "Water Works: Activating Heritage for Sustainable Development MOOC Intro","https://www.youtube.com/watch?v=XVrjix_5R1I","Our lives depend on water. We need it to sustain ourselves, for culture and agriculture, recreation and transport, even for defense. But water is also a threat. Floods, tsunamis and rainstorms can kill people and destroy their livelihoods. Water is always connected. Rivers flow to the sea, lakes evaporate, clouds produce rain or snow. These flows connect to human-made systems for drinking water, irrigation and sewage. People around the world have responded to the complex challenge of water for life on earth. They have put systems in place to collect, contain and move water around, in ways that are often embedded in social and cultural systems. We make sense of our surroundings by giving our cities the names of rivers or mermaids, or a name that emphasizes the location near the sea. Our awareness of the importance of water for human life has decreased over time. In recent centuries, we've used ever more powerful tools to either exploit or resist water. But as we have built to avoid floods, we have lost opportunities to live with water, for example by capturing sediments that enrich the soil.
Historic local water management systems can inspire us to find ways of meeting contemporary challenges related to climate change, industrial pollution, and the overuse of water for agriculture, farming, or mining. Understanding how people through the ages have valued water helps us realize that to live with water, you have to respect it and use it wisely. Co-creation by experts in the fields of water management and heritage, as well as the engagement of diverse communities, can lead to stronger designs and more sustainable solutions. To live with water is to ask, time and again, on many different levels, how water works. Thank you for watching." "MOOC Railway Engineering: An Integral Approach - Introduction Video","https://www.youtube.com/watch?v=ZFLlye4sJcQ","Look at our universe. An amazing assortment of incredible objects, isn't it? An enormous system in which everything is connected. You could call it a miracle. Take this guy. He overslept by 35 minutes, ran 531 steps while drinking 68 milliliters of coffee, with only 47 seconds left to catch his train. It's almost a miracle that he made it on time. Little does he know that the train approaching him is anything but a miracle. It's on time too, thanks to the commitment of people who made the switch from thinking of miracles to making them happen: creating a sophisticated, safe rail system that's been operating since the 1830s following the same principles. Tracks, wheels, motion. That story continues today, connecting cities, countries, continents and people all over the world. Imagine the importance of a proper wheel-rail interface, the value of solid catenaries and pantographs, the wear and tear of the material, and the significance of proper maintenance. It's engineering that keeps things going, helping us to stay safe, with bright heads connecting these dots, connecting us, so we can find our way to work, our friends, our homes, or completely new destinations. Without noticing, it's all part of one of the largest, most innovative systems in the world, where everything is connected, like in our universe. And the best part is: it isn't a miracle. It's engineering. You never realized that, did you? TU Delft offers you the chance to get acquainted with the complex challenges of railway engineering and operations during an exciting MOOC. Join us and get connected too." "Critical Raw Materials: Managing Resources for a Sustainable Future (Introduction Video)","https://www.youtube.com/watch?v=KuUz5zVZwrQ","In the coming decades, global GDP will more than double. There are risks of shortages in raw material supply. But for the exploration geologist, the haystack is the entire planet. Enormous areas of land are required by mining activities, and in the case of critical raw materials, this can be particularly challenging. But things could change. This is particularly the case for cobalt. We try as much as possible to map out all of the emissions and their potential impacts across the full life cycle. These are long-term solutions that we have to work hard on." "Global Housing Design - Online Course (Final Week)","https://www.youtube.com/watch?v=y1201lFEaEw","All of the stone sets cream in the pickest ace for the game." "Pre-University Physics Introduction Video","https://www.youtube.com/watch?v=2-ihA8gl46c","Why does a guitar sound different from a violin or a piano? How is it possible that you can bike at different speeds with the same muscle power? How can you use electromagnetic waves to transmit information?
Physics plays an essential role, not only in everyday life phenomena, but also in almost all advanced technology. Hi, my name is Sander Otte, I'm a professor at this university. Apart from doing research in quantum mechanics, I've also been teaching physics for the last 10 years. A proper understanding of physics is crucial for most university programs in engineering. In order to use physics beyond high school level, you will need to approach it in a more abstract and mathematical way. For this reason, the first few weeks in university can be a bit overwhelming. Things that you thought you understood very well are suddenly presented in a much more formal language. Pre-university physics is a free, compact online course in which we will cover all the physics that you need to get started in university. The course consists of three modules: mechanics, electricity and magnetism, and waves. In those modules, we not only cover theory, but also everyday examples and exercises that are worked out by enthusiastic students who were in your shoes only a few years ago. Physics is beautiful and lots of fun, but it takes some effort before you can really appreciate it. Pre-university physics will give you the best possible preparation for your technical studies." "Global Housing Design - Online Course (MOOC) - Introduction","https://www.youtube.com/watch?v=ZH8Ncj3Zlgg","I think we have a very different notion of house, place, dwelling and life. As architects, we talk about space as an object, but space is maybe an extension of our activities that slowly begins to transform itself through additions, and there it becomes a street and a house and a cluster in the town." "Advanced Credit Risk Management - Online Course (Introduction Video)","https://www.youtube.com/watch?v=wkAaHdboJts","Are you a credit risk professional? Hard, isn't it? New regulations and business challenges make credit risk a more demanding field now than ever before. You have to be right on top of your game all the time. Hi, I'm Dr. Pasquale Cirillo. My research is devoted to risk modeling and management, and credit risk is one of my favorite subjects. In particular, I like to investigate how changes in regulations, from Basel to IFRS, influence the way in which credit risk is assessed and hedged. I work closely with the financial services industry. Therefore, I understand your challenges and your needs. After making my MOOC, An Introduction to Credit Risk Management, I would now like to offer you, the credit risk professional, a course that will help you to gain and maintain the professional edge. How are we going to do this? Let's dig a little deeper. We will have a look at what lies behind the models and the formulas you use on a daily basis. We will discuss together how the regulatory changes influence the way in which you assess credit risk, how you hedge it and how you prioritize your exposures. We will combine theory and practice and dive into engaging and exciting discussions; together we will touch upon the issues you face every day. I invite you and other risk professionals to join me on this journey. We have all the ingredients to cook a very nice recipe together. After all, taking risks can also be fun. Thank you for watching." "Circular Building Products Course at TU Delft","https://www.youtube.com/watch?v=9pYPgohL5uc","The building industry has a tremendous hunger for new materials. At the same time, it is responsible for 35% of all the waste created.
This linear approach is the reason for massive loss of ecological and economic value. Making building products circular is a challenging task that questions our traditional way of working. But ultimately, it will lead to better products, improved ecological value and new business opportunities. As you all know, designing and producing circular building products requires a massive systemic change. But we can make this change happen. In fact, for some companies, it is already underway. In this course, there are people who will show us how they make the transition. Dealing differently with waste, that is, designing with waste, is one of them. New building products like window frames are entering the market, made from secondary material that has been harvested from the existing obsolete stock. There are even companies who take it to the extreme: how can we design products without any waste? And how do we make sure that circular products can be used and reused indefinitely? The changes in design and primary materials affect companies' modes of operation. Companies must constantly come up with new management schemes and business models in order to support their activities. Governments and policy makers must develop new policies in order to incentivize companies to follow this transition. Combining these ideas with feasible business models is key to the task. Here at TU Delft, we are dedicated to promoting the transition to a circular economy. This course offers professionals an in-depth understanding of designing, engineering, manufacturing and marketing circular building products. Through a series of lectures, interviews and personalized assignments, we can help you navigate the complex circular economy landscape. So, join us, work with us in turning your building product into a circular building product." "Extension School Unicamp Event 26 February 2021","https://www.youtube.com/watch?v=S8Lq5ix_rKk","The Extension School of Delft University of Technology is dedicated to developing and delivering high-quality continuing education, whilst also continuing to improve the quality of campus education. We want to equip people to solve today's global challenges, and while our learners include many students and researchers, we especially support professionals to upskill themselves in important technological and engineering developments. Our mission is clear: to make a positive impact on education, the lives of learners, and the world at large. We remain at the forefront of open and online education. Reaching hundreds of thousands of people around the world, we help them gain access to higher education in a flexible, effective and more affordable way, even translating our courses into other languages to increase accessibility. Our portfolio focuses on themes of great relevance to society, delivering them in a variety of formats, and through a growing number of short programs. What we do, we do for others, and we are proud of our success, of what the numbers signify, and of the global recognition we receive. But what we are most happy about is the impact we make, not only on individuals' lives, but also in terms of the numerous institutions that freely reuse our open licensed materials and our educational research. Our own lecturers and campus students benefit from the reuse of online resources and teaching methods, and from the innovative practices and tools. Learner evaluations and testimonials speak for themselves.
Our pedagogical model is key to our success, ensuring that learners' needs are at the center of the learning experience. It is adaptive, research-based, tested, and effective. And we make it publicly available. We share and grow together with our external partners, who are as important to us as internal ones. Networking and collaborating with universities and industry help us give learners knowledge that is highly applicable. Lifelong learning and contributing to education are gaining even more prominence at national and international levels. And we believe this opens up even more opportunities to share experiences and build collaborations. We look forward to continuing to develop online education for, and of, the future. Let's talk." "Urban Air Mobility – Online Course (Introduction Video)","https://www.youtube.com/watch?v=XBuP-QhEgB0","Road traffic congestion is one of the biggest problems in our modern society. In Europe, for example, four in ten citizens experience lengthy daily commuting problems. Together with high fuel consumption, noise problems and carbon emissions, this creates high costs. Now is the time to change this situation. It could be that we have a solution for all these problems. Urban air mobility is a vision taking shape at the moment. It seems determined to transform our cities into the cities of the future. Today we can speak of the most exciting time for aviation, taking us back to the drawing tables, redesigning almost everything we knew about conventional aviation. To this end, Delft University of Technology in the Netherlands has formed a group of experts on urban air mobility, guided by the Faculty of Aerospace Engineering. They are ready to take you on this new journey towards achieving the golden age of aviation. This course is the first of its kind, as it delves into various aspects covering the broad horizon of urban air mobility. You will get a bird's-eye view of the engineering and technology that goes into building and operating an air vehicle, like for instance an air taxi. In this course, you will learn the multidisciplinary concepts of designing an urban air vehicle, ranging from the aerodynamics and aeroacoustics of propellers, to the propulsion systems of air taxis, to flight control and stability, and finally the integration in the urban environment. Moreover, we also offer you virtual tours, where you will be exposed to the state-of-the-art experimental facilities of TU Delft, and follow interviews with industry professionals. Urban air mobility is a rapidly growing field, currently attracting the interest of the big players in the aviation sector, such as Airbus and Boeing, and of emerging groups like Lilium and EHang. This professional course offered by TU Delft targets graduates approaching companies, and engineers already working in the aviation industry. This is a wonderful opportunity to enrich your knowledge in the growing field of urban air mobility. Is this catching your interest? Find out more on our website." "Energy Slaves. Intro to courses: Zero-Energy Design and Advanced Zero-Energy Design","https://www.youtube.com/watch?v=kFOOAzpwOBY","Sporting is lovely. It makes you stronger and fit. Impressive. This is a typical Dutch couple in a typical Dutch dwelling. Everything is quiet and you would expect that they don't use any energy, but while these two are asleep the house still requires 200 watts of power. Power is the amount of energy one uses or produces during a certain period. Unit: joule per second, or in other words, watt.
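To make the figures in this narration concrete, here is a small Python check (our own sketch, not part of the course materials). Energy is power multiplied by time, so the coffee machine, the shower and the rowers mentioned below all follow from one formula.

```python
# A quick check (ours, not course material) of the figures in the narration:
# energy = power x time, with 1 kWh = 1000 W sustained for one hour.

def energy_kwh(power_watts, hours):
    """Energy in kilowatt-hours drawn by `power_watts` over `hours`."""
    return power_watts * hours / 1000.0

print(energy_kwh(900, 20 / 60))    # coffee machine on for 20 min -> 0.3 kWh (300 Wh)
print(energy_kwh(10_000, 6 / 60))  # 10 kW shower for 6 minutes   -> 1.0 kWh
print(1800 / 250)                  # 1800 W household / 250 W per rower -> about 7 rowers
```

Ten kilowatts for six minutes and 900 watts for twenty minutes reduce to the same bookkeeping: watts times hours, divided by a thousand.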
300 watts. Making coffee: 900 watts. If she keeps the hot plate on, the demand will remain 900 watts. Energy is power times the period of time that this power is delivered. Unit: joule, watt-hour or calorie. Leave your coffee machine on for 20 minutes and you will use 300 watt-hours. Toasting bread: one kilowatt. But it can even get worse. The energy required for heat is immense. When the heating is turned up, the boiler switches on and then demands at least 2.5 kilowatts. Showering is one of the most energy-squandering activities at home. The tank of a boiler has a certain stock of hot water, but this needs to be heated up when it is emptied and cold water enters the tank. When the boiler switches on because someone is showering, approximately 10 kilowatts of power is demanded. In other words, 10,000 watts. This means 30 rowers each delivering 333 watts: an insane amount of power for just the shower. Six minutes of showering, and you've already used one kilowatt hour. 12 minutes, two kilowatt hours. Half an hour, five kilowatt hours. For eight hours, we only have to deliver the basic power of the house, until the couple returns. Unfortunately, today, the basic power is unnecessarily high, because the thermostat has not been switched down. But it can be done. Without noticing, an invisible world is hiding in the house, and in the equipment we use. Everything we use at home costs energy. A Dutch household requires approximately 1,800 watts. Based on the power our rowers deliver, 250 watts on average, we would require 7 rowers permanently. But this is on average. The peaks in usage can sometimes go up to 20 kilowatts; that is almost impossible. Or is it? This is the same house, but with improvements, such as extra thermal insulation, a heat pump, and solar cells. In the open online course Zero Energy Design, TU Delft will teach you how to liberate all rowers from their slave labour. How? By making the house energy neutral." "Aerobic Granular Sludge (AGS) Technology for Wastewater Treatment - Online Course Introduction","https://www.youtube.com/watch?v=Yhh4aetge6A","The treatment of wastewater has been revolutionised in the last decade by the introduction of aerobic granular sludge (AGS) technology. It's a technology that brings significant benefits: a reduction in footprint, lower investment costs, less energy use and chemical consumption. And it also delivers improved environmental compliance and robustness compared with traditional wastewater treatment technologies.
In this TU Delft course, you will discover how AGS technology works, understand its underlying processes, and learn how to implement AGS. You'll make a VR visit to a functioning AGS treatment plant. You will hear about the latest developments from the inventors themselves. You will discover how to make operational calculations and design a treatment plant. Benefit from the experience of operators, policymakers and innovators from water authorities. All this, and you get a professional education certificate at the same time. The course is 100% online: study at the time and place that suits you. Sign up now." "Energy Slaves. Zero-Energy Design: an approach to make your building sustainable. #zeroenergy","https://www.youtube.com/watch?v=Jo5W6y0_Tkk","Sporting is lovely. It makes you stronger and fit. You sweat out your laziness. Impressive. This is a typical Dutch couple in a typical Dutch dwelling. Everything is quiet and you would expect that they don't use any energy, but while these two are asleep, the house still requires 200 watts of power. Power is the amount of energy one uses or produces during a certain period. Unit: joule per second, or in other words, watt. 300 watts. Making coffee: 900 watts. If you keep the hot plate on, the demand will remain 900 watts. Energy is power times the period of time that this power is delivered. Unit: joule, watt-hour or calorie. Leave your coffee machine on for 20 minutes, and you will use 300 watt-hours. Toasting bread? One kilowatt. But it can even get worse. The energy required for heat is immense. When the heating is turned up, the boiler switches on, and then demands at least 2.5 kilowatts. Showering is one of the most energy-squandering activities at home. The tank of a boiler has a certain stock of hot water, but this needs to be heated up when it is emptied and cold water enters the tank. When the boiler switches on because someone is showering, approximately 10 kilowatts of power is demanded. In other words, 10,000 watts. This means 30 rowers each delivering 333 watts: an insane amount of power for just a shower. 6 minutes of showering and you have already used 1 kilowatt hour. 12 minutes, 2 kilowatt hours. Half an hour, 5 kilowatt hours. For 8 hours, we only have to deliver the basic power of the house until the couple returns. Unfortunately, today the basic power is unnecessarily high because the thermostat has not been switched down, but it can be done. Without noticing, an invisible world is hiding in the house and in the equipment we use.
Everything we use at home costs energy. A Dutch household requires approximately 1,800 watts. Based on the power our rowers deliver, 250 watts on average, we would require 7 rowers permanently. But this is on average. The peaks in usage can sometimes go up to 20 kilowatts. That is almost impossible. Or is it? This is the same house, but with improvements, such as extra thermal insulation, a heat pump and solar cells. In the open online course Zero Energy Design, TU Delft will teach you how to liberate all rowers from their slave labour. How? By making the house energy neutral." "#Circular Building Products Online Professional Course- Lecture Sample","https://www.youtube.com/watch?v=2pkYoREBvtI","Construction and demolition projects are responsible for about 40% of emissions and about a third of the total waste in the EU, with a significant share being landfilled. So circularity is a way to better manage resources and reduce greenhouse gas emissions. Transitioning to a circular economy is crucial and requires us to revisit all processes related to product design and development. The circular economy approach developed by the Ellen MacArthur Foundation sets a starting point for the reuse of technical and biological materials. The key issue here is that circular economy innovation can deliver both environmental as well as economic benefits. To do that, a series of changes is necessary, not only to building product development, but also to business models and to society as a whole. As we will find out from our guests in this course, new top-down and bottom-up governance schemes are required to ensure that companies will be efficiently incentivized towards adopting circular economy practices. So circularity in the built environment depends on establishing co-creation processes through all scales of materials, products, buildings and cities. And a transdisciplinary approach is required to tackle the complex environmental and economic issues that arise. In this course, you will discover the opportunities that circular building products can offer. You will investigate them from various perspectives: design and technology, business models and stakeholders, management and governance. This week we will learn more about products, the basic ingredients of buildings, and how they relate to circular performance. We will also discuss some of the most common evaluation methods that are currently employed to measure product circularity. In week two, we will discuss technology and design challenges for circular product development, with regard to the R-strategies. In week three, we will scale up and discuss the vast number of stakeholders that are involved in a circular product life cycle. Several possible business models are discussed. Week four is dedicated to the frameworks that promote circular economy implementation. Circular building products need to be supported through top-down governance as well as bottom-up initiatives. Finally, in week five, we will evaluate the current practices and limitations and we will introduce you to the circular building product canvas. We will unravel the complexity of the task by introducing you to case studies. You will meet experts who can provide you with a theoretical framework and the tools to create new products and new business models. During each week, you will be invited to a number of complementary activities and assignments, so you can develop your knowledge and skills.
This way, we will support you to design and create new circular building products and processes for your own organization. Enjoy the course." "Functional Programming for Big Data Processing- Sample Lecture (#BD Processing Systems) #FP","https://www.youtube.com/watch?v=tZC5yv0e4Q8","This week, we discussed big data processing systems. We looked into the origins of MapReduce and Hadoop and saw how they were inspired by functional programming. We discussed how these systems use two primitives, mapping and aggregation, to separate the parallelizable portion of the user code from the portion that has high data dependencies. We also found out how these systems achieve resilience, which is important because they were designed to run on commodity clusters where failures have to be considered the norm and not the exception, especially for long-running jobs. Hadoop was highly popular, but over time new applications emerged which had different requirements and richer patterns of interaction. So we looked into Spark: how in-memory computation supports a more iterative and interactive style of applications, and how the emergence of topics like data science and machine learning fueled this development. We recognized how the programming model is much closer to functional programming, but as we will see, there are also other ways to interact with the system, like through Spark SQL. We saw how the system is based on immutable data and the abstraction of RDDs, but also how the alignment with data science made alternative formats like data frames more popular. Your final task for this week is to set up an environment in which you can work with Spark, because next week it will be all about programming the system and crunching some data." "Functional Programming for Big Data Processing- Sample Lecture (Intro Lambda Calculus)","https://www.youtube.com/watch?v=64i49tfa-ww","Functional programming has been around for decades, going back to the 1950s. Yet, it has recently become an extremely hot topic in the industry. The entire software infrastructure of companies like Twitter is written in functional programming languages. Even languages that were originally fully imperative have adopted ideas from functional programming. Why is this the case? Why this renaissance of functional programming? Well, one of the reasons is the consistency and elegance that this style of programming allows. In this module, we will look into the origins and basics of functional programming, which is lambda calculus. When you first see lambda calculus, you might not believe that something so structurally simple can be powerful enough to program in. Let's have a look at the building blocks of lambda calculus. And while it might seem a little bit abstract at first glance, bear with me, because you will see the mechanics soon. First, we have variables. This is your x. This is your y, your z. A placeholder which can be substituted for something else. Next is the abstraction. Now you see where the name is coming from. We write λx.t, and by that we actually mean an unnamed function of x with a term t. In regular math or programming, you would possibly write f of x equals t. Just that here, we don't have to give the function a name like f. Also note that we have t in this definition, which means that it is recursive. The final element is the application.
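Since the two lectures above lean on the same ideas, here is a minimal Python sketch (ours, not the course's material) tying them together: lambda-calculus style abstraction and application, and the map-plus-aggregation split that MapReduce-like systems use.

```python
# A minimal Python sketch (ours, not course material): lambda-calculus style
# abstraction and application, and the map + aggregation split that
# MapReduce-like systems use to separate parallelizable work.
from functools import reduce

# Abstraction (λx.t): an unnamed function of x with body t.
square = lambda x: x * x          # λx. x*x, bound to a name only for reuse
print(square(4))                  # 16

# Application: writing one term next to another applies the left term
# (a function) to the right term (its argument).
print((lambda x: x * x)(3))       # 9, applying an unnamed function directly

# Mapping: the parallelizable part, since each element is independent.
words = ["spark", "hadoop", "spark", "rdd"]
pairs = map(lambda w: (w, 1), words)

# Aggregation: the part with data dependencies, combining partial results.
def combine(counts, pair):
    word, n = pair
    counts[word] = counts.get(word, 0) + n
    return counts

print(reduce(combine, pairs, {}))  # {'spark': 2, 'hadoop': 1, 'rdd': 1}
```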
This might seem a bit odd to you, but by writing one term next to the other, we mean that the first term t on the left is in the shape of a function, and the second term on the right is what we want this function to be applied to. This particular notation requires a bit of practice to get used to, which we will do. But for now, this is it. This is all there is to lambda calculus. The lambda notation was invented by this gentleman, Alonzo Church. He originally did this as a way to write algorithms in a formal way, concretely for the so-called Entscheidungsproblem, which was brought up by Leibniz and formulated by Hilbert. So, as simple as it seems, you might be surprised to hear that it is Turing complete, which essentially means that we can express and calculate any computable function with it. In other words, this calculus can do everything that your favorite programming language can do, at least from the standpoint of computability." "Aeroacoustics – Online Program (Introduction Video)","https://www.youtube.com/watch?v=ITW8-KzOP_E","Mechanical systems are everywhere, and although they do a lot for us, they can also produce a lot of noise. Here at Delft University of Technology we are training the new generation of scientists and professionals to build sustainable and quiet machines. We are now ready to offer our knowledge to you, the working professional. Finding a proper balance between noise output and aerodynamic performance is not an easy task. There are lots of regulations about noise pollution, which are enforced by government agencies around the world. Specializing in aerodynamics and aeroacoustics is really challenging. That's why we offer this online course, so that you can get ahead in these fields. In just a few weeks you will learn how to translate complex aeroacoustic theories into practical design applications. Your creativity and experience will also help you develop innovative noise reduction strategies, improving your career prospects. We start by reviewing the physical principles behind sound generation and what parameters influence noise production. This knowledge is then translated into practice using both exercises relevant to industry and online simulations. Throughout the course you will receive personal feedback from our international experts, who will also discuss industrial regulations and equipment. Sounds interesting? Find out more on our website." "Acoustic Imaging – Aeroacoustics | Online Program (Sample Video Lecture)","https://www.youtube.com/watch?v=QXDbVeQfW6Y","Welcome to this video. Today we are going to discuss the basics of acoustic imaging and the uses and limitations of beamforming for aeroacoustics. In previous lessons we learned about the use of single microphones, which can measure the sound pressure at one location. However, they are difficult to use in environments with high background noise and they cannot separate different sound sources. In case we use several microphones simultaneously in an array, we can sample the sound field at several locations.
This allows for sound visualization and separation of sound sources, as well as a large reduction of the effects of background noise. However, these devices also have limitations, as we will discuss later. The reason why we need microphone arrays in aeroacoustics is that aerospace noise sources, such as aircraft, are typically very complicated and emit sound in different ways and at different locations. As we mentioned before, microphone arrays allow for acoustic imaging to isolate these sound sources. Let's discuss the basics of beamforming. Imagine that we have one monopole sound source. We define a scan grid of potential sound sources, since we do not know the location of the sound source a priori. Then the microphones of our array will record the sound signal of the source with different time delays. Using these time delays in a smart way, we can produce a beamforming output which shows a maximum at the location of the sound source. If we consider a grid point without a sound source, we obtain a much lower value. We can already see that the end result has some limitations, like the side lobes, spurious sources, and the beam width of the main lobe, which limits our resolution. Imagine now we have an array with n microphones. We build an n-by-1 vector p containing the Fourier transforms of the pressures of each microphone at the given frequency f. With this vector, we can define the cross-spectral matrix C, which contains the experimental information. We calculate it as the ensemble average over time. We then define a scan grid of potential sound sources. Each grid point has a position vector. And we assume a sound propagation model, normally a monopole, but not necessarily. For each grid point, we calculate the expected signals that would be recorded by each microphone if there were actually a real source at that location. We do so by using the so-called steering vectors, which are basically Green's functions. Here, i is the imaginary unit, f is the frequency, Δt is the time delay between each grid point and each microphone, and c is the speed of sound. This is just one of the many formulations for a steering vector; many more can be found in the literature. Then, we assess the match between the pressures modeled by the steering vectors and the actual signals recorded by the microphones. We do so by using the conventional beamforming formula, which gives us the source autopower for each grid point. As you can see here, C is the cross-spectral matrix, and everything else depends on the propagation model we chose. Therefore, beamforming can be seen as an exhaustive search over all grid points, which provides a source map as an end result. As an example, imagine that we have a single sound source on our scan grid. A typical beamforming source map will look like this. You can see that this method has some limitations, as we said before. First of all, there are some side lobes or spurious sources that could be confused with actual sources. This can be improved by using densely populated arrays. Secondly, the main lobe's beam width limits the spatial resolution: basically, the minimum distance at which two sound sources can be separated. This can be improved, on the other hand, by using large-aperture arrays. Therefore, we need a compromise solution. For a given setup and number of microphones, we can optimize the positions of the microphones within the array in order to obtain the best result.
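As a hedged aside (our own notation, geometry and normalization, not necessarily the lecture's exact formulas): with monopole steering vectors whose components are e^(-2πi f Δt_n)/r_n and a cross-spectral matrix estimated from the microphone pressures, the exhaustive search over the scan grid can be sketched in a few lines of numpy.

```python
# A minimal numpy sketch of conventional frequency-domain beamforming as
# described above. Geometry, variable names and normalization are our own
# illustrative assumptions, not the course's exact formulation.
import numpy as np

rng = np.random.default_rng(1)
c = 343.0                      # speed of sound [m/s]
f = 2000.0                     # analysis frequency [Hz]

mics = rng.uniform(-0.5, 0.5, size=(32, 3))   # 32 microphones
mics[:, 2] = 0.0                              # planar array in the z = 0 plane
source = np.array([0.1, -0.05, 1.0])          # true monopole position [m]

# Simulated pressure vector p at frequency f (monopole propagation model)
d_src = np.linalg.norm(mics - source, axis=1)
p = np.exp(-2j * np.pi * f * d_src / c) / d_src

# Cross-spectral matrix C; with a single snapshot this is the outer product
C = np.outer(p, p.conj())

# Exhaustive search over a scan grid in the source plane z = 1 m
xs = np.linspace(-0.5, 0.5, 41)
source_map = np.zeros((xs.size, xs.size))
for iy, y in enumerate(xs):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(mics - np.array([x, y, 1.0]), axis=1)
        g = np.exp(-2j * np.pi * f * d / c) / d   # steering vector (Green's function)
        w = g / (g.conj() @ g)                    # one common normalization choice
        source_map[iy, ix] = np.real(w.conj() @ C @ w)  # source autopower

iy, ix = np.unravel_index(source_map.argmax(), source_map.shape)
print(f"peak at x = {xs[ix]:.3f} m, y = {xs[iy]:.3f} m")  # near (0.10, -0.05)
```

With the grid spacing chosen here, the peak lands on the true source position, and the pattern around it is exactly the side lobe structure discussed above.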
In this example, we have 64 microphones arranged for a lower side lobe level and a better spatial resolution. Another advantage of beamforming is that incoherent background noise, such as wind noise or turbulent boundary layer noise, can be eliminated. Since this noise mostly contributes to the main diagonal of the cross-spectral matrix, we can improve the results by removing this diagonal. This is especially useful for closed-section wind tunnels, where the microphones normally measure the hydrodynamic pressure fluctuations of the wind tunnel boundary layer. In this example, we can see how the results improve when we remove the diagonal. Another consideration for beamforming in wind tunnels is the convection of the sound due to the presence of wind. Imagine an airfoil emitting trailing edge noise inside the wind tunnel. If we do not take into account the moving medium, the source map is shifted in the streamwise direction from the correct position. However, if we use a steering vector formulation that takes into account the Mach number of the flow, we obtain the correct position and sound levels. Another way to improve our results when using microphone arrays is to use advanced acoustic imaging methods. In this example, we are going to use the well-known method CLEAN-SC. This technique starts with the source map of conventional beamforming. It localizes its peak value and calculates the point source that would generate that peak value at that location. It then subtracts the contribution of that sound source from the source map and repeats the process iteratively, removing the next strongest sources and cleaning the source map. This method uses the fact that side lobes are spatially coherent with the main lobe. In this slide, we can see an example of the potential of CLEAN-SC. Here we have a conventional beamforming source map of an aircraft model in a closed-section wind tunnel. By using CLEAN-SC, we obtain much, much better results with virtually no side lobes. We can even discover new sound sources along the leading edge that were hidden by side lobes before. Many other advanced methods exist in the literature, mostly tailored for specific applications, but a detailed explanation of all of them is beyond the scope of this lesson. If you want more information on the topics discussed in this lesson, here are some recommended references. This concludes our video. I hope you enjoyed it, and see you next time." "Interview – Aeroacoustics | Online Program (Sample Video Lecture)","https://www.youtube.com/watch?v=6Ssa7trdmBc","Hi everybody. Welcome to this interview with Dr. Stefan Oerlemans, responsible for acoustics at Siemens Gamesa. Today we are going to talk a bit about airfoil self-noise. If I fold this piece of paper, I can say that it looks like an airfoil or a wind turbine blade. Could you please explain to us which are the most important sources of noise for a wind turbine? Yes, so this airfoil, you can see it as a blade section, a part of a blade, and this blade is moving through the air. So you have an air flow going from the leading edge over the airfoil to the trailing edge. And the most important source of wind turbine noise is trailing edge noise: it's when the flow passes the trailing edge that it generates sound at the trailing edge. Another potential source of noise could be inflow turbulence noise. When you have turbulence in the oncoming flow, it hits the leading edge of the airfoil and that could also generate sound on a wind turbine.
But that depends on the environment around the wind turbine, on whether there's a lot of atmospheric turbulence. The third potential source could be a three-dimensional effect at the tip of the blade. When you have a pressure difference, you could get a so-called tip vortex, with the flow going from the pressure side to the suction side, and that can also create noise from the tip of the blade. Okay, interesting. So you say that the most important one is trailing edge noise. Is there any way to reduce this noise? Yeah, there are different ways to reduce this trailing edge noise. One of them, the most obvious one, is to reduce the flow speed, because the sound depends very strongly on the flow speed, and that can be done by reducing the RPM of the wind turbine. The second way you could reduce trailing edge noise is by changing the shape of the airfoil in such a way that you maintain the aerodynamic performance but reduce the thickness of the turbulent boundary layer. Okay. That also reduces the sound. But the most important way to reduce trailing edge noise is by using blade add-ons, which can be retrofitted on a blade, and the most used example of that is trailing edge serrations. And I brought one here. So this is what they look like. Okay. So it's triangular teeth at the trailing edge, which are mounted like this, so they look like this. Okay. Of course, in reality, the blades are much bigger, but this is just to give an idea. And this reduces the sound. And can you explain to us briefly how they work? Yes. With normal trailing edge noise, if you have a straight trailing edge, then the turbulence passes the trailing edge at a straight angle. So there's a large discontinuity. Yeah. If you add the serrations, then the turbulence passes the trailing edge at an angle, at an oblique angle. And this reduces the efficiency of the acoustic radiation. Okay. Nice. And is there a way to improve somehow the actual trailing edge serrations that you showed us? Yes. It's interesting you ask, because I brought our latest noise reduction technology, which looks like this. So we still have the teeth here, but we also have finer combs in between the teeth. And this concept is actually inspired by the silent flight of the owl, the bird. It can fly much more quietly than other birds. And people think it's because of special features on the wings of an owl. And we try to mimic this natural feature by applying these combs in between the teeth. And that gives us an even larger noise reduction than with the standard serrations. Okay. Very nice. Interesting. So thank you. Thank you for being with us and for your time. It was really interesting to learn about this. You're welcome. Thanks again for watching us." "Sources of Noise – Aeroacoustics | Online Program (Sample Video Lecture)","https://www.youtube.com/watch?v=zsPiCqMXYRc","Welcome to this video. Today, we are going to discuss the basic sources of noise for mechanical systems. We have learned so far that sound is a pressure perturbation that is propagated in wave form, and that the concept of the Lighthill tensor gives us the opportunity to explore all possible ways to produce sources of sound. This can happen in the presence of turbulent stresses, with non-isentropic effects, or with viscous stresses. We have also seen noise defined in terms of exposure levels. Exposure levels depend on the strength of the source and on its distance to the receiver. A commonly assumed value for pain is typically 120 decibels.
Although a continuous exposure above 75 decibels is already problematic. Aircraft noise regulations have been developing through the years, certifying maximum exposure levels and imposing new reduction targets for the next decades. In particular, a reduction of about 10 to 15 decibels is required in the next decades for all typologies of aircraft. This requires, by 2050, aircraft that are 65 percent quieter and with a lower exposure impact in the surroundings of an airport. While we could think that flying less and with fewer passengers would solve our problems, this also makes flying less profitable. A much more beneficial way is obtained by considering the noise requirements in the aerodynamic design phase. In particular, the ACARE advisory report is trying to enforce a collaborative effort in the reduction of noise and fuel emissions for the benefit of the entire population. Such an effort requires a net reduction of CO2, NOx and perceived noise of more than 50 percent each. This requires knowing where the aircraft sources of noise are coming from. Recalling what we have learned, in a flying machine five important noise sources can be found. From the component side, we can find engine noise, airframe noise, propeller noise, helicopter rotor noise and sonic boom noise. Airframe noise includes the noise produced by all those flow structures which are generated by non-ideal aerodynamic surfaces, such as the wings, the landing gears, the fuselage, etc. In particular, every form of aerodynamic separation, or uncovered cavity in an aircraft, is a source of non-negligible aerodynamic stresses, and unsteady force variations are sources of noise. Sonic booms are particularly noisy since they arise from local non-isentropic flow conditions, and they are known because of the peculiar Mach cone footprint of some aircraft, such as the one in the picture. Only military aircraft can nowadays fly at Mach numbers above one over the oceans, and supersonic flights over land are not allowed, since sonic booms propagate quite significantly with distance. Propeller noise is instead generated by several components. The noise is characterized by the first harmonics due to the rotation of the blade. In particular, the tonal component due to the periodic flow variation induced by the rotating blade force is called loading noise, while the parting of the medium by the moving blade is the reason for another monopole contribution, the thickness noise. The best-known component of propeller noise is the tonal one, which is characterized by consecutive harmonics together with a relatively lower background or broadband contribution. In these graphs, you can appreciate how strong the harmonics are with respect to the background. If more complex configurations are used, the helical vortex system or the separated blade wakes can impinge on the downstream rotor, creating additional sources of noise. This results in a combination of different components that can change the directivity pattern of the entire system. Engine noise is instead characterized by its constituent components, including the fan, the compressor, the combustor, the turbine and the exhaust. With the current tendency to increase the size of the engine while reducing the mass flow through it, the contribution of the fan grows with respect to the other ones.
Rotor noise is instead characterized by the interaction between the vortical structures generated by the blades and the supporting structures. In particular conditions, the blade can also interact with the wakes and vortices from the previous ones, creating a particular phenomenon called BVI, blade-vortex interaction. The acoustic footprint of such a system is quite complex and can rarely be addressed without using modern tools from computational fluid dynamics. An interesting test case is represented by another bladed machine, the wind turbine. In particular, although the correlation between noise exposure and health effects is not really evident, the annoyance created by such machines is well known. Wind turbine noise is generated by airfoil self-noise. Six mechanisms are generally referred to for the creation of such noise. We will discuss this later; however, have a look for yourself at the different mechanisms here. They are determined by the characteristics of the edge, leading or trailing, and by the characteristics of the boundary layer, turbulent or laminar. It is quite interesting to see that, due to the increased size of aircraft and wind turbines, similar flow regimes and noise sources can be found between the two different machines, with important consequences for the aerodynamic and aeroacoustic design. This concludes our video. I hope you enjoyed it and see you next time." "Calibration Questions (Online Course Sample) – Applying Structured Expert Judgment","https://www.youtube.com/watch?v=RBOT0iEiA4k","The aim of structured expert judgment is uncertainty quantification when data are lacking. The ability of experts to quantify uncertainty is thus a central element of the classical model. The key idea of the classical model is the objective evaluation of expert assessments of uncertainty. The method proposes calibration questions to be used for the evaluation and aggregation of expert assessments. The calibration questions, or seed variables, are questions whose answers are known to the analyst at the time of the elicitation or shortly after, but are not known by the experts. Similarly to the questions of interest, the calibration questions should regard uncertain quantities for which experts can provide their assessments. The calibration questions need to cover the same domain of expertise as the questions of interest, as much as possible. That is because an important assumption of the classical model is that experts' performance on the questions of interest is the same as their performance on the calibration questions. To consider a simple example, suppose the question of interest is the following: what percentage of India's population will be resistant to antibiotics by 2025? Then the following question, how many cases of antimicrobial resistance were reported in the state of Kerala in 2017, can be an example of a calibration question. Finding good calibration questions is a crucial step in any expert judgment study. A very important rule is that calibration questions should not be almanac questions. We consider almanac questions those that regard information which can be easily recalled by experts. This is of course domain specific, but to give a simple example of such an almanac question, consider the question of interest: when will man land on Mars? Now, the question of when man landed on the Moon would be a question to which every expert, and probably every non-expert, would know the answer, and would therefore not be an appropriate calibration question.
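To make the role of the calibration questions concrete, here is a toy Python sketch (our own illustration, not Cooke's actual scoring rule). An expert states 5%, 50% and 95% quantiles for each seed variable, and we check how often the known realizations fall inside the central 90% interval; a well-calibrated expert should capture roughly 90% of them.

```python
# A toy sketch (ours, not the classical model's actual calibration score) of
# the idea behind calibration questions: compare an expert's 5%-50%-95%
# quantile assessments against realizations known to the analyst.

assessments = [
    # ((q5, q50, q95), realization) -- illustrative numbers only
    ((10, 25, 60), 31),
    ((100, 180, 300), 320),   # realization above the 95% quantile
    ((0.5, 2.0, 5.0), 1.4),
    ((40, 60, 90), 85),
]

hits = sum(1 for (q5, _, q95), real in assessments if q5 <= real <= q95)
print(f"{hits}/{len(assessments)} realizations inside the central 90% interval")
```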
Usually, data coming from official reports, or data which are not publicly available, are used for the calibration questions. The ideal scenario is when the answers to the calibration questions become known soon after the elicitation. For example, suppose the annual report with statistics of high interest for the study is known to be released in September. Planning the elicitation a couple of months earlier would then be ideal. In this case, the data are referred to as predictions. Unfortunately, this is not always possible. Then one would have to rely on data already available. The data are then referred to as retrodictions. As mentioned before, good sources are official but not yet public reports, or recent and, again, not publicly available data. Sometimes the calibration questions cannot be chosen from the same domain as the questions of interest. This can be simply due to the fact that there are no data available yet for that given domain. Think about newly developed drugs or technologies, for example. In this case, an adjacent domain is chosen for developing calibration questions. Developing good calibration questions is not an easy task, but it's probably also not the most difficult thing you will ever have to do. Plus, the effort of developing good calibration questions will be reflected in the quality of the expert data. The same advice from Roger holds for developing calibration questions: one can do it well or one can do it badly. Good luck developing good calibration questions." "Introduction Video (Online Course Sample) – Applying Structured Expert Judgment","https://www.youtube.com/watch?v=IHQIlCeVAiQ","Have you ever encountered a complex decision-making process which was hindered by a lack of data, and you did not know what exactly could be done? Have you ever wanted to use expert opinion in a more rigorous manner, but did not actually know how? Have you ever wanted to use structured expert judgment, but were not sure exactly how to apply it? Well, now it's time to get complete hands-on experience in applying structured expert judgment. My name is Tina Nane and I am truly excited to embark with you on this exciting journey. During this course you will be able to focus on your own expert judgment study or choose from exciting and relevant topics. You will then design and carry out your own expert elicitation. You will find your experts and run the elicitation. Finally, you will analyze the expert data using one of the most rigorous methods of evaluating and aggregating expert opinion: the classical model. The classical model is also known as Cooke's method or the Delft method. That is because it has been developed here at TU Delft by Roger Cooke. You might know that the Dutch are famous for cycling. The bikes are everywhere, and their number and shapes are impressive. Since cycling is a symbol of Dutch culture, the classical model is nicely represented using this adapted tandem bike. I'm quite fond of riding this cool bike. During the course you will receive extensive guidance and feedback from the instructors to follow the appropriate steps for riding this bike, that is, for conducting the expert elicitation. You will receive materials which you can use in any future elicitation. You will learn about the dos and don'ts of a structured expert judgment elicitation and benefit from the experience of the course team. Enjoy the ride!"
"Comparative Cognitive Mapping – Water Strategy and Planning","https://www.youtube.com/watch?v=iqoUw40T79c","In this clip we will be looking at comparative cognitive mapping as a method to analyze actor perceptions. Let's start with a short story that is often used in classes on negotiation. Two brothers are fighting over one orange. Eventually their father offers a fair solution. He cuts the orange in half and the boys each get one half. The older brother only uses the zest of his half to garnish a cake. The younger brother squeezes his half of the orange for the juice. Surely a better deal would have been possible. Strategic actors have expectations about the consequences of their decisions. If we understand these expectations, we can understand their decisions. And if we understand the expectations of different actors, we can perhaps identify better decisions not yet considered by any of them. Cognitive maps offer a structure to map the assumptions of actors. What is causing the problem and what could be effective solutions and why? Organizing these assumptions into a causal diagram helps to put structure to the decision making. A basic cognitive map consists of factors and causal relations. If there is a positive relation, an increase in factor A will lead to an increase in factor B. A decrease in factor A will lead to a decrease in factor B. For instance, for a shop, more sales will mean more profit. The cognitive maps that we use contain four types of factors. Actions show the decisions an actor can take. A shop owner can decide on the quality of items on sale and on their pricing. Gولs describe the relatively unique aspect of the game. A shops with another Na for a short period of a lifetime. Therefore, a publisher member allows auctionfights on the forms often update with the company and the assistant and sixty individuals to hunt for the whole game and away. goals describe the values of a decision-maker. A goal might be to make a high profit. System factors are intermediary factors that help unpack the causal mechanisms involved. For instance, the quality of items in the shop may influence their appeal for customers. Likewise, the pricing of items may influence if the shop can compete with items in other shops. And then there are fourth context factors. These also influence system factors, but they are beyond the sphere of influence of the decision-maker. For instance, on a very cold or a very rainy day, there will be less customers in the shop. We can use causal diagrams to capture all assumptions about a complex problem in one large causal map. But what if we would capture the assumptions of different actors in different maps? We would then have a basis for comparison and we can clearly see both differences and agreements in the perceptions of actors. Just think of what you can do with such information. You can see what the key issues are that most actors care about. Or you can see if there are issues that are not underrated for most of the actors, but which are in fact crucial to a few key players. Or you might see disagreements about the evidence-based behind decisions. Such insights can help you to think of ways to build a joint evidence-based or to exploit differences in thinking from mutual benefit. Just like the example of the orange with which we started our video. So, in summary, cognitive maps help you to capture the problem perceptions of decision-makers. 
Comparing the perceptions of different actors helps to understand their decisions, helps to identify knowledge gaps or disagreements about the evidence base, and may enable you to propose interventions that are of mutual benefit. The next question obviously is how to construct and analyze these cognitive maps. We will look into this in our next clips." "Actor and Strategy Models (Course Sample) – Multi-stakeholder Strategies Online Course","https://www.youtube.com/watch?v=5T18XJ22MBI","The task is simple, the tools are clear. Or are they? Selecting the best tool for the job is not always easy. This also applies to actor models. There are many models to choose from. So how do you decide? A conceptual framework can help. In this clip I'll explain how. Your task is to get a better understanding of your actor environment. So what is there to know about actors? Many things. Better yet, what is essential or fundamental to explain strategic actor interactions? Well, let's start with two fundamental assumptions. The first assumption is that of rationality. We assume reasonable actors who choose their actions with certain objectives in mind. The objective might be as simple as hanging a painting on a wall, or it might be something more complicated. Let me be clear here: the fact that we assume rationality does not mean that we assume actors who know everything. They may be misinformed, confused, or see only a part of the picture. We only assume that they do have certain objectives, interests, or values that help to explain their actions. The second assumption is that of resource dependence. We assume that actors depend on resources controlled by others. Resource dependence can take various forms and gradations, but the idea is simple: someone wants to hang a painting on the wall, and someone else has the tools required for the job. Based on these two assumptions, we can try to sketch actor behavior. Let's have a look at the core elements needed for making such a sketch. Actors interact in a certain setting and in a certain network. We call this a decision arena. This can be a neighborhood, a management team, or a professional platform. The arena determines who is involved and how they interact. This means that every arena consists of actors and the rules that guide their relations. Relations and rules can be formal as well as informal. Formal rules are often written down and officially approved. Informal rules, for instance, tell you what is appropriate when talking to your neighbors, or when you are participating in the different setting of a management meeting. Within an arena, three basic concepts explain the actions of a strategic actor. Values tell us what matters to a particular actor. You might value a pleasant environment, and therefore your objective might be to hang something nice on the wall. It's not a very urgent desire, but it would be nice at some time in the future. Resources tell us what means actors have to influence the world around them. You don't have a nice painting yet to put on your wall, but you have money to buy one. And you're not in a hurry anyway, so you also have time. Third, perceptions tell us how actors understand their situation. How do they think the world works? What do they know about the state that the world is in? Let's assume that your best friend is planning to bring you a beautiful painting tomorrow. He didn't tell you because it's a surprise. And precisely today, your neighbor is driving you crazy by constantly hammering nails into the wall.
You lose your temper and get into a dispute with your neighbor, swearing at her about her stupid hammer. If you knew that you would need that very same hammer the next day, your actions would probably be different. Together, these concepts provide a framework for actor modeling. Different actor models focus on different parts of this framework. Comparative cognitive mapping helps to analyze perceptions. Social network analysis helps to analyze the relations among actors. Game theory models help to analyze the resources of different actors. So we have looked at the conceptual framework for actor modeling, because this helps us to understand why we are using a particular type of actor model. There are two fundamental assumptions and different basic concepts at network level and at actor level. We now have a sound basis for the use and selection of actor models. But of course, there is more to it than this. A case illustration of different models will help to get more insight." "Social Network Analysis (Course Sample) – Multi-stakeholder Strategies Online Course","https://www.youtube.com/watch?v=dBFzoS6acps","When was the last time you checked up on your network? Maybe at a conference, trying to score business cards, or when you were browsing through your LinkedIn contacts? How does your network influence your behavior? How does your network give you power or block you in achieving what you want? Network analysis assumes that the behavior of actors like you depends on the structure of the actor network. Let's say we want to predict your salary. Of course we could look at your education level or your years of working experience. But maybe it's much more insightful to look at who your close relations are in your organization, and whether you can control crucial information flows between other people. With network analysis we look at the structure of the network to explain behavior and outcomes. That's what makes network analysis methods unique. Network analysis can be a very powerful tool to understand the behavior and strategies of actors. In this video you'll learn for what kind of questions the method is most suitable. Let's take an example of how networks lead to innovation. I was doing a study for a Dutch energy grid operator. They are on a quest towards renewable energies and they know they cannot do it alone. So this company was heavily involved in all kinds of innovation projects with other actors. But they were confused. What's our position in this innovation network? Are we working with the right partners? What if we were to end some of these projects? Would the innovation network fall apart? Now, network analysis helps you to answer these types of questions. Let's look at a simpler example first. Let's say you're giving a birthday party. You can only invite six people. What makes the best birthday party? If you are friends with all of them but none of them know each other, like here in picture A? Or if all six are also friends of each other, like in picture C? Or perhaps something in between? Now picture C is what we call a dense network. All actors are related to each other. From theory we know that dense networks produce joint activities. So probably this birthday party will be the most lively. Picture A is a very sparse network. Perhaps less lively, because people don't really know each other. But here you have a much better chance to hear new gossip, because theory tells us that sparse networks carry much more new information. So how does your network at work look?
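The dense-versus-sparse contrast can be made concrete with networkx (our own sketch; we model "picture A" as a star, the host knowing all six guests who are strangers to each other, and "picture C" as a complete clique of seven friends).

```python
# A small sketch (ours, using networkx) of the birthday-party example:
# density is the fraction of possible ties that actually exist.
import networkx as nx

sparse = nx.star_graph(6)      # picture A: you plus six mutual strangers
dense = nx.complete_graph(7)   # picture C: everyone is friends with everyone

print(nx.density(sparse))      # 6 of 21 possible edges -> about 0.29
print(nx.density(dense))       # 21 of 21 -> 1.0
```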
Is it more like network A or C? And how did it impact your work? Now, there are of course a lot of other things that affect your birthday party: the people's age, friendliness, or the number of jokes that they know. In other words, the properties or attributes of people. In our example we did not consider these attributes. We just looked at relationships. That's the core assumption of network analysis: the structure of our relationships governs our behavior. This assumption is what makes network analysis so interesting, but also fundamentally different from most other methods. And we will see that it creates quite a few challenges as well for executing the method. So to summarize, network analysis explains the behavior of actors by looking at the relationship structure, not the attributes of individual actors. Actors can be anything: people, teams, organizations, industries, countries, and so on. It's most useful in situations where actor relationships matter, where it's about information flows between people or organizations, power positions, clique behavior, etc. In this way it's a method uniquely different from most that you might know. It offers great opportunities for answering your research questions, but also great challenges in collecting and analyzing data." "Dr. Fatih Birol's interview for: ""Inclusive Energy Systems- Exploring Sustainable Energy for All""","https://www.youtube.com/watch?v=qAqKUIZqxe8","If the countries bring electricity to their people, if they choose, in addition to renewables, as we talked about Mozambique, Tanzania and developing Africa, natural gas, if they use natural gas, I have no problem with that. We cannot put the responsibility for global emissions on Africa, on Africa's people. Africa's share of global emissions today is less than 1.5%. So, therefore, to tell the Africans, when we sit in Paris, or in Amsterdam, or in New York, or in Tokyo, no, you cannot, you should only use renewables: it is not a position that I would share. We should leave the decision to the African governments, the African people. If they use renewables, the best way; but if they think that for their economies natural gas is the solution today, they should go for that. Because for me, it is very important that those power plants provide electricity to the villages, for the parents, to keep the medication of the children in the refrigerator. If we want to reduce emissions, there are many places in the world which can reduce emissions much more easily than Africa, with less pain." "Adaptive Planning - Online Course (Introduction Video)","https://www.youtube.com/watch?v=qxUsWyOL_ZI","In this coastal area you can see that the world is changing very rapidly. You can see all the opportunities and threats. Climate change, sea level rise, new possibilities for trade, new technologies, along with demographic changes and financial crises. These are all factors that are very difficult to predict. Now what does this mean for flood risk management and for port expansion and city planning? And what does this mean for you, a professional that needs to make future plans despite all these uncertainties? In this online course we will teach you a new way of thinking to address these questions in an adaptive way. At the Delft University of Technology we have combined the relevant research into one course that will provide you with the how of adaptive planning. In this course you will learn how to use adaptive pathways in developing your plans.
We will introduce you to decision making under deep uncertainty in a step-by-step, hands-on and even playful way. Within a highly interactive and personal online environment you will learn by doing. You will use an online serious game to evaluate how effective you can be when planning using adaptive pathways. For a topic like this we cannot give you a simple handbook. Instead we will equip you with new tools and methods, but most importantly a new way of thinking. In that way we will improve your long-term planning and decision making skills. Curious? Head over to our website to find out more." "Culture Sensitive Design - Crossing Cultural Chasms - Card Set","https://www.youtube.com/watch?v=ZFW1yKqlH-c","As designers, we always try our best to find out what end users want, as well as what they need. But do we always understand exactly what it is that matters to them? We are, after all, often outsiders in an increasingly multicultural and globalized environment. The traditional design research we carry out can get us closer to understanding what users need, but it is rarely enough. Users' cultures can express themselves in many different ways. We need to look more attentively and get closer to achieve acceptable results. We have produced a card set, a hands-on tool that helps designers cross the cultural chasms to achieve effective user-product interaction in any cultural context. These three sets of 16 cards each let you see beyond the obvious. They give you a lens to examine your user's culture, and ultimately help you connect with your intended users. So you can avoid culture-related mismatches and, moreover, find new and even better solutions. With visual explanations and examples, every card helps you find how relevant culture can be for your project, and which cultural aspects matter the most. Whether alone or in a team, with the card set you can work faster, with greater confidence, and use the potential of culture to produce effective designs." "From Optical to Electrical Modeling","https://www.youtube.com/watch?v=n9auQeX6hUE","Hello everybody and welcome to the electrical modeling part of this course. Here we will discuss how the results of the characterization of the properties of materials and of optical simulations of a PV device are used to complete the modeling of the overall performance of the device. In fact, the electrical modeling of solar cells results in the estimation of their performance expressed in terms of external parameters such as current, voltage, power and conversion efficiency. The correct modeling of a PV device's performance depends on the accurate characterization of material and device properties and on a rigorous optical modeling to determine the absorption profile in the device under illumination. The calculated absorption profile determines where and how many charge carriers are generated within the device. This is described by the generation rate of charge carriers. Properties of the individual materials that form the device, such as conductivity, carrier mobility, doping concentration and others, determine how many of these photo-generated charge carriers in the end contribute to electricity generation. This allows us to quantify the final performance of our solar cell model, both in terms of the current density-voltage characteristic and the spectral external quantum efficiency. It is thus important that both aspects, characterization and optical modeling, are as accurate as possible.
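One standard way to make the link from a current density-voltage characteristic to the external parameters concrete is the textbook single-diode model. The sketch below is not the course's own tool, and every parameter value in it is assumed purely for illustration.

```python
# Minimal sketch: external parameters from a JV curve via the textbook
# single-diode model. All parameter values below are assumed, not measured.
import numpy as np

kT_q = 0.02585            # thermal voltage at 300 K, V
J_ph = 40e-3              # photogenerated current density, A/cm^2 (assumed)
J_0 = 1e-12               # diode saturation current density, A/cm^2 (assumed)
n = 1.2                   # diode ideality factor (assumed)

V = np.linspace(0.0, 0.8, 1000)
J = J_ph - J_0 * (np.exp(V / (n * kT_q)) - 1.0)   # single-diode equation

P = V * J                          # power density, W/cm^2
mpp = np.argmax(P)                 # maximum power point
Jsc = J[0]                         # short-circuit current density
Voc = V[np.argmax(J < 0)]          # first voltage where J crosses zero
FF = P[mpp] / (Jsc * Voc)          # fill factor
eta = P[mpp] / 100e-3              # efficiency at 100 mW/cm^2 input

print(f"Jsc = {Jsc * 1e3:.1f} mA/cm^2, Voc = {Voc:.3f} V, "
      f"FF = {FF:.2f}, efficiency = {eta * 100:.1f} %")
```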
In this way, the results of the electrical modeling describing the performance of the PV device can be considered reliable. Several different computational methods are available for the electrical simulation of photovoltaic devices. In general, they can be characterized by their approach in terms of the geometric description of the device structure and the numerical modeling approaches for the calculation of the device performance. When it comes to the geometric description, one-dimensional or more complex two- or three-dimensional approaches can be selected. The choice depends on the material structure, interface morphology and device architecture. One-dimensional models are in general simpler, but still effective enough to simulate solar cells with flat interfaces or randomly textured interfaces. On the other hand, multi-dimensional methods become necessary when a device contains multi-phase materials and interfaces based on periodic structures. Examples are periodic gratings that affect the absorption of light and special contact configurations that influence the flow of charge carriers in the device. When it comes to the performance modeling, there are two approaches that are mainly used. In the first one, the device is divided into a finite number of elements, and the semiconductor equations that form a mathematical description of the operation of the device are numerically solved for each of these elements, delivering results for the relevant variables. From these variables, the device performance is calculated. In an alternative approach, the entire solar cell is modeled with an equivalent electrical circuit consisting of various lumped elements. In the remaining part of this course, both the geometrical description and the various numerical modeling approaches of electrical simulations will be discussed. Pros and cons of different methods will become clear and you will be able to select the best modeling approach for the solar cell you want to study." "Introduction to PV Characterization","https://www.youtube.com/watch?v=YNnPZ4eCqK4","Characterization: why is it important for the modeling of photovoltaic materials and devices? Welcome to the characterization part of this course, where in the coming week and a half we will answer this question, discussing the characterization of PV materials and devices and their importance in the context of PV modeling. For the purpose of this course, we will use the term characterization to indicate a set of processes and measurements used to determine the properties of PV materials and devices. These characteristics are fundamental inputs for all types of simulations. For this reason, they must be carefully and precisely worked out to avoid wrong simulation results. Several characterization methods are usually employed in the field of photovoltaics. In general, they can be grouped into five categories: optical characterization, electrical characterization, analysis of morphology, analysis of structure and composition, and the determination of the device performance. Each of these five aspects is essential for the success of simulations and must be carried out before the proper modeling activity can start. Optical characterization includes techniques that determine the optical properties of materials. These can be used as inputs for optical simulations. The optical properties are mainly the refractive index and absorption coefficient of such materials.
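For reference, these two optical properties are tied together through the extinction coefficient; the relations below are standard and are stated here only as a reminder, not as course material:

```latex
% Complex refractive index and its link to the absorption coefficient.
\tilde{n}(\lambda) = n(\lambda) - i\,\kappa(\lambda),
\qquad
\alpha(\lambda) = \frac{4\pi\,\kappa(\lambda)}{\lambda}
```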
Moreover, these measurements can be used to obtain additional information on materials and films. Examples are the density of defects, the thickness and the roughness of layers such as the ones commonly present in solar cells. Several methods are used in the photovoltaic sector to characterize the optical properties of materials. In this course, we will focus on spectrophotometry, which allows the measurement of the wavelength-dependent reflectance and transmittance of layers and devices. We will then discuss ellipsometry, one of the main techniques to determine the refractive index and absorption coefficient of materials. Both spectrophotometry and ellipsometry can also be used to obtain the thickness and the roughness of layers. Finally, we will talk about spectroscopy, in particular photothermal deflection spectroscopy, known as PDS, and Fourier-transform photocurrent spectroscopy, or FTPS. These two methods are particularly useful for the analysis of defects and sub-bandgap absorption in layers and solar cells. The electrical characterization is used to determine the electronic properties of materials, which are then used as input for electrical simulations. The most important properties that can be analyzed are the lifetime of charge carriers, the conductivity of materials and layers, and the doping concentration. Among the various available methods, we will discuss measurements of carrier lifetime using the quasi-steady-state photoconductance method, which is abbreviated to QSSPC, and photoluminescence, or PL. For the conductivity characterization, we will focus on dark conductivity measurements, which can also give information on the doping concentration by calculating the activation energy of a doped material. The characterization of the morphology of layers and solar cells is carried out to obtain information on the geometry of a photovoltaic device. This is useful to correctly build up models in both optical and electrical simulations. The most important parameters are the thickness and the surface roughness of layers and solar cells. As I already mentioned, spectrophotometry and ellipsometry can be used to determine these two aspects. Other methods include different types of microscopy, like atomic force, scanning electron and transmission electron microscopy. The characterization of structure and composition gives information on a material at the atomistic level. In particular, the crystal structure and the chemical composition of materials can be determined. This information can be useful for some types of simulations, like ab initio modeling based on density functional theory. In addition, the knowledge of a material's crystal structure and chemical composition is important for the modeling of some optical and electrical properties, for example in the case of ellipsometry measurements. To determine the crystal structure of layers, methods like X-ray diffraction and Raman spectroscopy are commonly used. For the chemical composition, we will focus on secondary ion mass spectrometry and Auger electron spectroscopy. Finally, we will talk about the characterization of PV devices. These measurements determine the external performance of a device. This is useful to evaluate the quality of simulations in the process known as the calibration of the modeling platform.
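The dark-conductivity analysis mentioned above rests on an Arrhenius law, sigma(T) = sigma_0 exp(-E_a / kT). A minimal sketch of the extraction, with fabricated data standing in for a real measurement:

```python
# Minimal sketch: activation energy from temperature-dependent dark
# conductivity via an Arrhenius fit. The "measured" data are fabricated.
import numpy as np

k_B = 8.617e-5                               # Boltzmann constant, eV/K
T = np.array([300.0, 320.0, 340.0, 360.0])   # temperatures, K (assumed)
E_a_true, sigma_0 = 0.35, 50.0               # values used only to fabricate data
sigma = sigma_0 * np.exp(-E_a_true / (k_B * T))   # dark conductivity, S/cm

# ln(sigma) is linear in 1/T; the slope equals -E_a / k_B.
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
print(f"Extracted activation energy: {-slope * k_B:.2f} eV")  # recovers 0.35 eV
```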
The performance is determined as a function of the electrical potential in so-called current density-voltage measurements, and as a function of the spectrum in quantum efficiency measurements. JV measurements can be conducted in the dark and under illumination to understand a solar cell's electronic behavior. Regarding the quantum efficiency, it is important to measure the EQE, the external quantum efficiency, but also to calculate the IQE, or internal quantum efficiency, to better understand the optical and electrical behavior of a PV device. Before we begin with the discussion of the different measurements, it is important to talk about the importance of a correct characterization. The correct characterization of PV materials and devices is fundamental to obtain accurate results with simulations. The optical, electrical, morphological, structural and compositional properties are all crucial inputs of optical and electrical simulations. Also, the measured performance of devices is important for the calibration of simulations. Modeling can give incorrect results if the wrong input properties are used or if the device performance is incorrectly measured. For this reason, material properties and device performance need to be determined in the best and widest possible way, commonly as a function of four parameters: electrical potential, temperature, intensity of light and spectral distribution. Only if all four of these aspects are taken into consideration can the results of modeling be trusted. With this message, we conclude this introduction to the characterization of PV materials and devices. In the coming videos, we will talk about the various measurements and their relevance to the modeling of PV devices." "Introduction to Optical Modeling","https://www.youtube.com/watch?v=DIgTmsUpfQE","Welcome to the optical modeling part of this course. In the coming lectures we will talk about the various aspects of optical modeling of photovoltaic devices. We start by asking the question: what is the optical modeling of PV devices? With the term optical modeling we indicate the series of steps necessary to model the optical performance of a solar cell. This performance is normally quantified by the absorption, reflection and transmission in all layers of the solar cell. These three quantities allow us to determine the useful absorption in the active layer of the device and the optical losses, expressed by the reflection and transmission of the device and by parasitic absorption in the supporting layers. In the slides you can see an example of the absorption in all layers of a tandem solar cell device. Several quantities can be determined from the results. First, as depicted in the graph, the implied photocurrent density generated in all active layers in the device, indicated with the letter J. This quantity indicates the maximum current density that can be generated in the active layers. It assumes that all absorbed photons generate an electron-hole pair and that all holes and electrons are collected. Then, the information about absorption in the active layer can be converted into the generation rate of charge carriers, indicated with G. This quantity is of great importance since it is one of the fundamental input parameters for electrical simulations, as explained later in the course.
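The implied photocurrent density J described above is the elementary charge times the integral of the active-layer absorptance weighted by the incident photon flux. A minimal sketch, with a flat toy spectrum standing in for AM1.5 and an assumed absorptance curve:

```python
# Minimal sketch: implied photocurrent density from an absorptance profile.
# The spectrum and absorptance below are toy stand-ins, not real data.
import numpy as np

q = 1.602e-19                              # elementary charge, C
wl = np.linspace(300e-9, 1100e-9, 500)     # wavelength grid, m
phi = np.full_like(wl, 4.5e27)             # spectral photon flux, photons/(m^2 s m), toy AM1.5
A = np.clip((wl - 300e-9) / 500e-9, 0.0, 0.95)   # assumed active-layer absorptance

# Trapezoidal integration of A(wl) * phi(wl) over wavelength.
f = A * phi
J_implied = q * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wl))   # A/m^2

print(f"Implied photocurrent density: {J_implied / 10:.1f} mA/cm^2")
```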
Depending on the selected modeling approach, this parameter G is determined either only as a function of the depth within the device, hence as a function of only one dimension, or as a function of the exact three-dimensional position inside the solar cell. An accurate optical modeling depends on the correct properties determined during the characterization. In particular, the use of correctly measured refractive index values is fundamental for every layer of the solar cell. In addition, a correct characterization of the device morphology is crucial to build up an accurate solar cell model. Finally, the measured device performance can be used to calibrate the chosen simulation platform and ensure that the obtained results can be trusted. Several methods are available to model the optical performance of photovoltaic devices. First, they can be divided by the domain in which the equations are solved. In frequency domain approaches, the material optical properties are defined as a function of the frequency of light, hence as a function of wavelength. The equations are then solved for every wavelength in the specified range and at every point in the defined structure. On the other hand, time domain approaches can solve the equations for a wide frequency range in a single simulation, at every point in the solar cell model. However, the optical properties of every material are measured as a function of wavelength. As such, they need to be fitted with specific functions depending on the type of material used. The fitting of some materials is more challenging than for others, as will be explained. For this reason, in some cases it might not be convenient to use time domain approaches. The selection of the appropriate modeling tool also depends on the dimensions of the modeled device and textures. When very thin layers are used, or textures with sizes smaller than the wavelength of light in the material, we must use a method that is able to deal with wave optics. This means that the software needs to be able to model complex optical phenomena like diffraction and interference. On the other hand, when only thick layers or textures with large features are included, methods that can only deal with geometrical optics, by which we mean ray optics, are sufficient to accurately characterize the optical performance of a photovoltaic device. A final distinction can be made between rigorous and non-rigorous methods. Rigorous methods include all methods that obtain their solution by calculating the electromagnetic field inside the solar cell. This is usually achieved by solving the Maxwell equations within the device volume. Several methods are available and will be described in the following lectures, including finite-difference time-domain, or FDTD, the finite element method, abbreviated to FEM, and rigorous coupled-wave analysis, or RCWA. On the other hand, the non-rigorous approaches do not compute the intensity of the electromagnetic field inside the device. Rather, they rely on equations and models that describe the propagation of light in different conditions. For example, the Lambert-Beer law is used to describe absorption in a thick uniform layer, or scalar scattering theory is employed to model the interaction of light with small random textures. These approaches aim at making the problem to be solved simpler, but can be as accurate as rigorous methods if properly applied.
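As a concrete taste of the simplest non-rigorous ingredient, the Lambert-Beer law gives the photon flux, and from it the generation profile, in a thick uniform layer. A minimal sketch for a single wavelength at normal incidence, with reflection ignored and all values assumed:

```python
# Minimal sketch: Lambert-Beer absorption and the resulting generation
# profile in a thick uniform layer. All values are assumed for illustration.
import numpy as np

alpha = 1e5        # absorption coefficient, 1/m (assumed)
d = 200e-6         # layer thickness, m (assumed)
phi_0 = 1e21       # incident photon flux, photons/(m^2 s) (assumed)

x = np.linspace(0.0, d, 1000)
phi = phi_0 * np.exp(-alpha * x)   # surviving photon flux at depth x
G = alpha * phi                    # generation rate, pairs/(m^3 s)

print(f"Fraction absorbed in the layer: {1 - np.exp(-alpha * d):.3f}")
print(f"Generation rate at the surface: {G[0]:.2e} pairs/(m^3 s)")
```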
Other tools have been developed throughout the years, including commercial FDTD software, the OPTOS platform developed at Fraunhofer ISE, the CROWM suite by the University of Ljubljana and many others. In the following lectures, both rigorous and non-rigorous methods will be introduced. You will have the chance to improve your optical modeling proficiency by using the fourth version of our in-house developed GenPro software." "Culture Sensitive Design - Global Design Research | Online Course (excerpt)","https://www.youtube.com/watch?v=9vViZdG3nK0","Global design research is a field that hardly exists in terms of publications and so on, because it exists in practice, I think, with many people trying to find their own way of doing things. And though it's becoming more and more important to think on the global level, as globalism is really important in the world, there is surprisingly little actually published about global design research, and in particular about how you bring in different cultural perspectives and how to make that work on the global level. We felt a need to reflect on the work that we have done over the past nine years or so with this network, and to bring it down to a few insights, reflections, principles, even a small manifesto in the back as well, to share with the world what we have written." "Culture Sensitive Design - Cultural Values and Practices | Online Course (excerpt)","https://www.youtube.com/watch?v=D-RG2jsOjWw","In this video I will give you a lens to look at culture. This lens, with which you can examine a culture or compare cultures, is one way to approach the concept of culture, which means that there are also other ways to frame culture. So this is my disclaimer. The lens I propose here is actually focusing on relationships between people, on how they preferably interact with each other in a specific group. A starting point is that products are somehow mediators of these relationships. This will become clear in the course of this video. Based on these dimensions and other cultural theories, I composed a set of socio-cultural dimensions specifically for designers. They can be used for any cultural group and not necessarily on a national level. And they should be used without the scores. This is important, because through research we concluded that using scores is not suitable in design projects. Firstly because users of products are often not organized by nation. And secondly, the scores are based on averages of large populations that do not necessarily represent the people you design for. Even if the scores were good indicators of the preferred value orientation, there would still be many other factors that the designer could consider as important. For example, the product does not necessarily support existing values but could be designed with the intention to change a culture or to bridge two cultures with each other. So just applying scores can be misleading. However, the dimensions are useful if you use them in a more qualitative way. For example, they can be used as a checklist to consider which values are important for you as a designer or for your project. It's important to be very specific about the intended group and situation. Here you see how a design team used the list of dimensions to communicate the current (the red dots) and the desired (the blue dots) value orientation they were aiming for when designing a ritual. In the next video I will explain these dimensions one by one.
I hope to make clear, with product examples, what these socio-cultural dimensions, which are in principle based on relationships between people, can mean for you as a designer." "Culture Sensitive Design - Key Terms | Online Course (excerpt)","https://www.youtube.com/watch?v=cNejv0gDJw0","During the course you will become familiar with terms that are used by cultural theorists. These distinctions are useful to indicate and interpret certain aspects of cultural processes. If you have a clearer understanding of what these terms mean and what they could mean for design, then it's easier to use them in your work as well. To get there, you are therefore asked to study each key term and find your personal examples. You will use a template with the key term, a definition, a picture that illustrates your example and a few sentences to help explain your example. During the course you will be asked to upload your template and share it with the other participants. In this video, six key terms will pass by: global culture, dominant culture, subcultures, folk culture, high culture and mass or popular culture. Global culture refers to the way cultures all over the world." "EUCalc MOOC teaser","https://www.youtube.com/watch?v=vtALJBz35lw","Europe is setting the goal to combat climate change through collective effort. Every day, more technologies are developed to help us transition to a decarbonised Europe. To ensure a bright future for all, we want to understand the possible paths for making decisions. However, information on the various developments and possibilities is scattered, consequences and trade-offs are unclear, and there are many divergent interests. What if we could combine this information into a comprehensible tool, a single webpage? Then we could make informed decisions about the future of Europe. EUCalc provides a user-friendly modeling solution, accessible free of charge. It calculates greenhouse gas trajectories, social implications, cost implications and future energy demand, at the ambition level of your choice. An online course on climate change mitigation is offered to future decision-makers. With the help of EUCalc, we can make informed decisions about our path to a bright future." "Energy Friendly Renovation Processes - Online Course - The Merger of Interests (excerpt)","https://www.youtube.com/watch?v=Ms9kHeI_nvA","But it's not only the professionals who have to support this task. The residents have to back it as well. Especially the residents; after all, very little is possible without their consent. This means that a lot of different parties have to want energy-friendly innovation. But how can this be achieved? There is no clear answer to this question, as the circumstances vary far too much from project to project. However, it is possible to develop a way of thinking about and looking at the task that increases the chance of actually achieving the objectives. And that brings us back to the title of this lesson, the merger of interests perspective. This is a way of looking at the task that has emerged from an interaction between science and practice. A way of looking at things that may seem obvious at first, but after due consideration turns out to be quite different from what's usual. This way of thinking and acting is not only characteristic of energy-friendly innovation, but is a way of looking at the sustainability task in the construction sector in the broader sense.
A way of looking at things that my colleagues and I have developed on the basis of existing scientific knowledge and experience gained over more than 25 years. This way of working offers added value, and not only for the construction sector. Thanks to its simplicity, it appears to be generally valid and also useful in many other sectors. It is by no means a unique approach, because there are all kinds of scientific theories that are in some ways similar to this approach, and which served as inspiration during the development of the merger of interests perspective. These include Rogers' diffusion of innovations theory, the Harvard Negotiation Project, Susskind's consensus building approach and related strategies for building solid foundations. There are probably even more sources with similar perspectives. Within this course, we will use the merger of interests perspective as a theoretical basic principle. It forms the glasses through which we will view the task of energy-friendly renovation. The central idea behind the merger of interests perspective is that sustainability measures, in other words measures that represent the public interest, can evoke desire, even in people who are not yet concerned about the public interest, for instance people who are not at all interested in energy efficiency. The science journalist Lone Frank provides a good illustration of this desire in her book The Neurotourist: Postcards from the Edge of Brain Science, previously published under the title Mindfield. In this book, she interviewed some of the world's leading brain scientists on what she feels is the impending neurorevolution, and on page 236 she writes: every time one of them, male or female, saw a product they really liked, blood rushed to a little area towards the front of the brain. The medial prefrontal cortex lit up like a beacon in the images. This is what the merger of interests perspective is all about. How can you get sustainability measures to light that beacon in the front of the brain? And in the specific case of energy-friendly renovation, how can you do that with energy-related measures in existing housing? So, the merger of interests perspective stands for the aim of lighting a beacon in the brain with sustainability measures. The merger of interests approach developed from this demonstrates in three steps how you can achieve this for a specific project in practice. The first step is to identify the interests of the people involved in the project. In other words, the people here and now. What are their interests, indeed? What do they dream of?" "Energy Friendly Renovation Processes - Online Course (excerpt)","https://www.youtube.com/watch?v=nTYF3DIzPFw","To reach the highest possible level of energy reduction, a combination of active and passive measures is needed. This level of energy efficiency can be achieved in new housing construction, but it is also possible by refurbishing existing dwellings. Level two will create dwellings that need almost no additional energy. Level three involves high-end renovations, such as the Prêt-à-Loger house that is shown here. At this level, the dwelling is improved in such a way that over the course of a year, the dwelling produces at least as much energy as the occupants consume on average. Let's have a closer look at level three. At this level, the affordability of technical measures is key. Affordability can be reached through innovative payment arrangements, where the current amount of the energy bill is invested in the renovation.
As a result, the occupants pay the same as they did before: the rent will be higher, but the energy bill will be lower. In this way, high-end sustainable housing investments can still be affordable for residents and financially attractive for the landlord. We call these net zero energy solutions. While affordable net zero energy solutions can be achieved for a single home, it is hard to develop a viable business case for only one dwelling. To make this high-level refurbishment cost effective, product innovation, industrialization and an increase in scale are needed. The net zero energy approach is therefore mainly suited for the construction or refurbishment of series of dwellings. In the Netherlands, landlords and construction industry partners have established a program called the Accelerator to meet these requirements. Together, these parties want to refurbish tens of thousands of dwellings. One example from that program is shown here. Level four introduces two interrelated perspectives to sustainable housing management: the social perspective and the neighborhood perspective. Considerations at this level go beyond dwellings alone. The approach considers all energy and financial flows in the neighborhood and links them to existing social needs to create a new business case. The ambition of energy-friendly renovations serves as a driving force to achieve other objectives in the neighborhood. For example, by using residual heat from nearby factories, or by sharing facilities such as windmills, biomass installations, solar parks, cars and, very Dutch, bikes. Engaging residents in sustainable housing management can be the starting point for other activities that increase social cohesion and sustainability and create more attractive neighborhoods. By adopting a perspective that goes beyond the individual dwelling, neighborhoods can be created that might well be able to produce more energy than they consume." "Ask the Professor Section - Railway Engineering: Track and Train Interaction | Online Course","https://www.youtube.com/watch?v=R4BTeY2ety0","For example, I have here an idea of how this looks. You have here a cross-section. You see the trough, some embedding material and the rail profile. This profile is really special because it's a fairly small one. Normally you need a high one, because you need resistance against bending; you need a high rail, because otherwise you don't have this bending stiffness. This small rail looks like it's not good, but it helps to get rid of noise. The noise is really low; it will save you more than 3 dB. And that's also nice, because we don't like noise on a railway like this. But we can have this low profile, because you have a big resistance against bending due to the construction of the embedded rail. Another way is what we have here with the concrete, where you have blocks in between; you can just make normal track. This looks like sleepers, but it is embedded in a concrete slab, and it's a block system. I think that's another way of making a better rail. This one we use sometimes as well. Here I have two specimens of the better rail, the low profile and the high profile, and there you can see the differences. But in stiffness it could be the same, depending on the way you make the construction.
"Thermite Welding - Railway Engineering: Track and Train Interaction | Online Course","https://www.youtube.com/watch?v=NYBZC_zLCUQ","Thermit welding of rails. The rail ends must properly cut so that the gap between the rails has the required size. Also, the rail must be properly aligned by controlling the joint with the old railing edge and the top of the rail. The next step is installing the mold with the cross mold. It is important that there is no gap between the mold parts so that the liquid still will not flow out of the mold. Most sand is used around the mold to create the cross mold. After applying the high temperature, the sand will be hardened and become ceramic. Before the welding, the rails within the cross mold have to be preheated to the required temperature. So that the rail properties near the weld after the welding will be as homogeneous as possible. After the preheating, the thermit portion is placed by the glass bowl and the ignited. As a result of the chemical reaction, the liquid still flows into the mold, filling up the gap between the rails. After the initial cooling down and possible, post heating period, the mold with cross mold can be removed. The excessive metal around the weld in the rail has to be removed as well. Obviously, the shape of the rail and an outside the weld is not the same. Therefore, the original shape of the rail head surface will be restored using the grinding machine." "Braking System - Railway Engineering: Track and Train Interaction | Online Course","https://www.youtube.com/watch?v=5hrb5IgyVss","Breaking systems. The brake is needed to stop a train at the desired position. Based on the braking mechanism, two types of brake can be recognised, adhesion and non-adhesion brakes. Mechanical brakes, which are the most essential type of brakes, consist of tread and disc brakes. Disc brakes can be mounted either on the axle or on the wheel. At braking, the discs are clamped by brake pads on the calipers to be the rotation of the wheel. Due to the space availability in the bogies, the axle mounted disc brakes are usually used on trail of bogies. While the wheel mounted disc brakes are used on motorized bogies. Atthesion decreases as speaks increase, that can result in wheel sliding during braking. Therefore, the rail brakes were introduced. That do not depend on adhesion and use adecorrent and frictional force." "Principle of Bogies - Railway Engineering: Track and Train Interaction | Online Course","https://www.youtube.com/watch?v=_hG0wZtZWsc","Usually, rail-movical consists of rail-car and two-boggles. Bogus are the main part of a rail-movical. Usually, unnoticed by the passengers, they are very important for safe rail-movical operations. The main functions of the bogus are to support rail-car body, to ensure stability of rail-movical on both tangent and curve track, to provide good right comfort of passengers by absorbing vibrations and light-nearated by short and long-wave irregularities of the railway track. Bogus can be either motorised, that is, there is a motor on engine embedded in it, or trailed, that is the bogus without engine. The main elements of the bogus are the frame, the wheel sets, and the suspension system. A motorised bogus has additionally the traction motor, wheel gear, and transmission system. Here we can see a motorised bogus. The motorised means that motor is embedded in the design of the bogus. Here we can see the motor, here we can see the gearbox, here we can see the wheel sets. Suspension system. 
The suspension system of a bogie consists of a primary and a secondary suspension. The primary suspension connects the wheelsets to the frame of the bogie, while the secondary suspension connects the frame to the car body. The design of the suspension system can differ depending on the bogie, but the main functions of the suspension are the same. Here you see the primary springs and the secondary suspension of the bogie. The secondary suspension is responsible for the comfortable ride of the passengers and absorbs the low-frequency vibrations of the bogie." "FRP versus Concrete and Steel – FRP Composites in Structural Engineering | Online Course Sample","https://www.youtube.com/watch?v=0IbDsNCBvUo","One of the main selling points of FRP composites is that they combine low weight with high strength and relatively high stiffness. It is time to give some numbers to back up this statement. In this video we are going to compare the mechanical performance of FRPs to that of traditional building materials. How is the performance of FRPs in terms of stiffness and strength? And how light is lightweight actually? We will compare with concrete and steel, but also include the lightweight metal alloys of aluminum and titanium. Let's first consider stiffness. We will plot the stiffness versus density to classify the different materials. The plot is in a log-log space, so that we can cover different orders of magnitude in the stiffness. Our ideal material is going to be located in the top left corner: low weight and high stiffness. We start by placing steel and concrete. Steel has a density of almost 8,000 kg per cubic meter and a stiffness of 210 GPa. The stiffness of concrete lies between 30 and 40 GPa at a density of 2,400 kg per cubic meter. Aluminum alloys are approximately two times stiffer than concrete at almost the same density, while titanium lies somewhere in between aluminum and steel. Now let's put some ingredients of FRPs in this chart. Glass fibers have a density that is comparable to concrete and aluminum, and a stiffness that is somewhat higher. The Young's modulus of glass fibers lies around 70 to 80 GPa. Carbon fibers are lighter than glass fibers, with a density around 1,800 kg per cubic meter, and carbon fibers have a stiffness that is higher than steel, at 230 to 350 GPa. To be fair, I should say that the values shown here are for the stiffness in axial direction. Carbon fibers have different stiffness in different directions. The axial direction is the stiff direction, but also the most relevant one. Carbon is close to our ideal material: stiffer than steel, lighter than aluminum. The properties of glass are also good. However, these values are for the fibers only. Fibers have to be combined with a matrix material to form a material that can be used for structural applications. In FRPs, the fibers are combined with a polymer matrix. The polymers that are used, epoxy, polyester, maybe a thermoplastic like PEEK, are all very light, with a density between 1,100 and 1,280 kg per cubic meter. But their stiffness is also much lower than any of the other materials in our chart. So for FRPs to have a good stiffness, we must have a high fiber content. The composite stiffness is obviously going to be lower than the stiffness of the fibers. The good news is that the weight will also be lower, especially for GFRP with the heavier glass fibers. This is not the end of the story.
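The role of fiber content sketched above can be estimated with the standard rule of mixtures for a unidirectional ply. This is a textbook first-order estimate, not the course's own tool, and the material values below are assumed to sit inside the ranges quoted in the lecture.

```python
# Minimal sketch: rule-of-mixtures estimate of axial stiffness and density
# for a unidirectional glass/epoxy ply. All material values are assumed.
def rule_of_mixtures(E_f, E_m, rho_f, rho_m, V_f):
    """Voigt (parallel) estimate for fiber volume fraction V_f."""
    E_c = V_f * E_f + (1.0 - V_f) * E_m          # axial stiffness
    rho_c = V_f * rho_f + (1.0 - V_f) * rho_m    # density
    return E_c, rho_c

for V_f in (0.3, 0.5, 0.6):
    E, rho = rule_of_mixtures(E_f=75.0, E_m=3.0,           # GPa
                              rho_f=2500.0, rho_m=1200.0,  # kg/m^3
                              V_f=V_f)
    print(f"V_f = {V_f:.1f}: E about {E:.0f} GPa, density about {rho:.0f} kg/m^3")
```

Even at a 60% fiber fraction, the ply stiffness stays well below the fiber value, which is exactly the point made above.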
These properties are for unidirectional FRP composites, which have low stiffness and strength in the direction perpendicular to the fibers. To mitigate this weakness, laminates are made: layered structures with different fiber directions in the different layers. Depending on the application, the fiber distribution can be UD dominated, or it can be such that the laminate has equal properties in all directions. This last extreme is obtained with a quasi-isotropic layup. If we also include the quasi-isotropic layup in the chart, we see that the performance is less impressive. For a real FRP design, the stiffness in the primary load-carrying direction can be in between the UD and the quasi-isotropic values. A stiffness in one direction that is higher than the quasi-isotropic one is going to have negative consequences for the stiffness in other directions, which will be even lower than the quasi-isotropic value. One more point of attention is that the out-of-plane shear stiffness of laminates remains low, irrespective of the fiber orientation in the layers. In any case, there are no fibers in the direction of this shear stress. If we remake this chart for strength, the relative location of the metals is similar, with a strength of roughly 370 to 770 MPa for steel and 200 to 414 MPa for aluminum. The strength of concrete, even the compressive strength, which may go up to 90 MPa, is significantly lower than that of the metals. The polymer matrices that we use have a strength in the range of 40 to 80 MPa. Accounting for the low weight, that is already good. The fibers are performing really well when it comes to strength. Both glass and carbon have a much higher strength than the classical engineering materials, at 2,700 to 5,000 MPa. When combining fibers and polymer, the strength that can be reached in UD FRP is still higher than that of steel, at 1,000 to 3,000 MPa. So for strength, the comparison between FRP composites and traditional materials is more favorable than for stiffness. However, here too, the material behavior is directional. The UD material is weak in the transverse direction. In multi-directional laminates, the strength is lower than in the UD material. In all cases, the interlaminar strength remains low. Moreover, we have been looking at tensile strength. The compressive strength of FRP is typically lower than its tensile strength. So, we end up with a range of strength values. Moreover, strength is not the complete story when it comes to failure. The ductility of composites is often poor, which makes it difficult to exploit this high strength fully. In conclusion, FRP composites are indeed light, much lighter than steel. The stiffness of GFRP is lower than steel, similar to concrete, but the strength can be higher. It is important to realize that both strength and stiffness can be optimized by playing with the fiber direction, although this always comes at the expense of the performance in other directions. And no matter how the laminate is designed, the out-of-plane performance remains polymer dominated." "Course Structure – FRP Composites in Structural Engineering | Online Course Sample","https://www.youtube.com/watch?v=1uICwYgesr4","Let's get started. In this first video, I want to introduce you to the structure of the course, so you can get an impression of what you will be doing, how, and most importantly, why. In creating this course for you, we had several learning objectives in mind. These are about what you, as the learner, will be able to do after you finish this course.
First, we want you to be able to recognize the advantages and limitations of using fiber-reinforced polymer, FRP, composite material instead of traditional materials in infrastructure and building projects; in short, to know when to apply it. Then, you need to be able to make a proper choice from the wide range of different FRP materials and production processes for a specific application. After that, you should be able to deliver a realistic design of FRP structures, or to perform a critical review of such designs, and to perform design verification of structural members and the joints between them. And finally, for some of you, it is important to be able to apply classical laminate theory to compute stresses and stiffness in composite laminates, and to create a suitable finite element model for basic FRP structures and structural members and use the results for design verification. We are aware that this course will be interesting for several profiles of professionals involved in the infrastructure and building industry: designers and consultants, contractors, project managers, architects, reviewers and decision makers. This course is for all of you, but the fact is that not all the learning objectives are equally important for all the professional profiles. The designers need all of them, including the detailed analysis. Consultants would probably not go into deep computation and analysis, being mostly interested in the first four learning objectives, about when, which and how to use FRP and how to do the design verification. Contractors, project managers and architects do not need to know that much about the design verification, but the first three learning objectives are very important for them. Reviewers need to be able to judge the choice of the material and the feasibility of the design, sometimes to check if the verifications are in line with the regulations, and to judge the applicability of finite element models and results. Lastly, but very importantly, the decision makers need proper knowledge to decide when, or more importantly why, and which FRP material to put into their projects. To make a confident decision, they also need to know the current state of the design codes. If you like tables better than pyramids, then this is where you can find yourself. Maybe you do not find your exact title here, but we hope that you can find your place and the relevance of the learning objectives in your situation. So, in order to achieve all this, to gain new skills, you will mostly learn by doing. Therefore, the course is built around three groups of assignments that connect those learning objectives: the design assignment, the analysis and design verification assignments, and the review assignments. You will spend roughly 50% of the envisaged time working on those, rather than just passively observing. The rest of the time you will use to study the content and discuss in the forums. The design assignment is practice on the scale of a structure and is done in a group. The analysis and design verification assignments are practice on the scale of a structural component, done individually. The review assignments are also done individually. The conceptual design assignment connects the first three learning objectives. You will work on it in groups of three to four participants, which we will select based on the list of skills that you provide. We want to create strong teams, covering all skills, so that each of you will have the opportunity to bring your best and spend time on the learning objectives most important for yourself.
The idea is to work on a real case from some of your previous projects. Therefore, your first task is to bring your own case. From the several cases within the group, we will select the one which is the most suitable to be redesigned using FRP, in the online group kick-off meeting. Then you will continue with tasks to deliver the list of requirements and the conceptual design, followed by the preliminary design and, lastly, the input for the final design. The analysis and design verification assignments are individual. They are based on smaller problems with predefined steps, which will help you to gain experience that you will use in the design group assignment. The first task is to use the simulation tool that we created for you and to compare the results to some simple formulas for laminates. Then you will work on the design of a cross-section for a simple hollow beam. This will help you to get first experience and guidance for your conceptual design group assignment. You will also work on the optimization of the material composition for a certain combined stress state, which on the one hand will help you to bring your group conceptual design to the preliminary design level. Using the design recommendations to select the safety and conversion factors for the verification of the simple hollow beam will help you to do the same in making the input for the final design in your group assignment. And then you will go further than your group assignment: you will perform ultimate and serviceability limit state design verification of a hollow beam based on current design recommendations. The last part is to employ finite element analysis of a hollow beam and to compare it to hand calculations. This task is optional and can be chosen as an alternative to the third group, the review assignments. The review assignments consist of reviewing reports of the conceptual design and preliminary design of someone else's group, and an individual design verification assignment. This way you can skip the FEA assignment if it is not your preferred learning objective, and get more insight into design aspects by looking into how other participants did it. To enable you to work on the assignments, we created the content around them, not the other way around. The content and the assignments are divided into five main modules and a last wrap-up module. In the first module we deal with the components of the FRP material, introduce laminates and show some example projects. The second module is about structural design. Within two weeks you will learn about production processes, joints, how to approach the conceptual design of structures with FRP, serviceability- and strength-driven design, the design of laminated FRP decks, and the strengthening of existing concrete structures using FRP. With the third module we aim to provide the theoretical background of the material behavior that you need to be able to understand the design guidelines. We will deal with orthotropic elasticity, but also with strength and failure modes. Then, in module four, we will discuss the specific behavior of the material: the environmental influences and durability, long-term mechanical behavior in terms of creep and fatigue, the influence of production processes, and quality and robustness. Module five is about design verification. First we will look at the current level of design codes and recommendations, and then delve into ULS and SLS verification of cross-sections of members, including finite element analysis.
We will also look into specific details of design verification and recommendations for the design of bolted and bonded joints. The wrap-up module consists of review and final feedback. The content within the modules is presented in the form of video materials and quizzes where you can test your understanding. Videos are short but very condensed. We assume that you will be spending two to three times more than the actual video length by rewinding, taking notes, etcetera. Reading materials include more details on certain topics, and input data and examples for some of the assignments. Time-wise, it is a nine-week journey of about five to six hours of your engagement per week, including work on the assignments, but also content, quizzes and participation in the discussion forums. Here we see the time plan for the assignments. We start with a relaxed first week, where the main purpose is to get to know each other and the material, in order to prepare for working on the real assignments. The group assignment and the content that goes along with it start first; then we move to the individual assignments. As mentioned before, there are optional paths within the individual assignments that you can choose, to best fit your learning objectives. Option one is to work on the finite element analysis. If you are not interested in that, you can choose to work on the review assignments. You need to choose one option before the first optional assignment starts. Assignments are organized in a timely manner, because some of them depend on the previous ones. Each square in this chart represents approximately 30 minutes to one hour of your engagement. For example, we envisage that you will need approximately one or two hours to select one of your previous projects and upload the drawing with a short description. You will all work at different times, according to your agenda. That is the advantage of an online course. But that's also a danger, because you will tend to postpone because of your other duties. There are deadlines to keep you on track. You will see them in the system. When working on the group assignment, it is important to synchronize with the other members of your team. For example, use the group kick-off meeting to discuss how you will organize this and which of the available online platforms for collaborative work suits your team best. We will use the discussion forums that are available in the course platform for exchanging thoughts about the individual assignments, general questions, theory, etcetera. The assessment of your successful completion of the course is based on the grading of the assignments by the instructors and automatic grading of the quizzes. There are droppables in the individual assignments. This means that you do not have to finish all of them. It is another opportunity to fit the course to your professional needs. Please look at the syllabus for more information. At the end of this introduction, I wish you a good start and I invite you to join the social cafe, the discussion forum where you can present yourself to the other participants and to us, or just say hello." "Modelica Demo","https://www.youtube.com/watch?v=mHbKb7FNzjI","Modelica is an open-source, declarative, multi-domain modeling language, which is developed by the non-profit Modelica Association. The two important features of Modelica that make this language suitable for system modeling are object-oriented modeling and acausal modeling. While Modelica resembles object-oriented programming languages such as C++ or Java,
it differs from them in the fact that Modelica is a modeling language rather than a conventional programming language. This means that in Modelica it is possible to maintain a clear separation between the definition of the mathematical problem, the differential-algebraic equation set of the model, and its numerical solution. The implementation of a system model generally starts with the encoding of elementary sub-models, which are easier to test. Then, models of increasing complexity, or the final system model, can be obtained by just interconnecting these base modules appropriately. As shown later for the system model of a simple gas turbine, it is also possible to structure the models so as to create a direct correspondence between them and the physical components or subsystems of the process. The connections between the models resemble the physical interconnections between the components of the actual system. The result of this approach is that the reuse of existing models becomes a natural part of the modeling process, consequently reducing the time to develop a system model. The component models are generally collected in model libraries, depending on the type of application they refer to, the type of model, for instance 0D or 1D, or their use, for example whether they are suitable for steady-state or dynamic simulation. Moreover, an increasing number of open-source Modelica libraries exist. Some of them cover different aspects of propulsion and energy conversion systems modeling, such as fluid properties computation, thermo-hydraulic components, or control systems. As I told you before, another key feature of Modelica is that it supports acausal modeling. This means that the model equations in Modelica are expressed in a declarative way, as formulated on paper, in an unordered sequence. Thus, the user does not write a program, but a model representing a physical system. The resulting code turns out to be more concise and clearer, that is, easier to modify and to extend. Here I'm showing you the code of a combustor model. You can easily recognize the energy balance, where the lower heating value of the fuel is used, and the balance equations for the chemical species forming the fuel. Thanks to the acausal and object-oriented modeling approach enabled by Modelica, the input-output causality of a model does not have to be fixed a priori. Thus, the implementation of each component model is independent from the characteristics of the system or the boundary conditions of the problem under consideration. This means that it is not necessary to develop different versions of the same component model for each possible combination of its boundary conditions. The choice of the inputs and the outputs can be automatically adapted to the context in which the model is used. For instance, in the example shown here, the temperature of the flue gases at the outlet of the combustor is specified as input of the system model. If the fuel mass flow rate is instead specified as model input, then the flue gas temperature at the outlet of the combustor becomes an output of the model. Last but not least, thanks to the features of Modelica, the model developer or user does not have to take care of transforming the model into a detailed stepwise algorithm, as required by procedural languages such as C++, Fortran, Matlab and Python. This operation is instead performed in an automated way by the symbolic software of the simulation environment.
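The acausal idea can be mimicked outside Modelica. The sketch below uses Python's sympy rather than Modelica itself: one declarative, combustor-style energy balance is solved for either unknown, so the causality is chosen at use time rather than fixed a priori. All symbols and the toy balance are invented for illustration.

```python
# Minimal sketch: one declarative equation, two causalities (sympy, not
# Modelica). The toy combustor energy balance and its symbols are invented.
import sympy as sp

m_fuel, LHV, m_gas, cp, T_in, T_out = sp.symbols(
    "m_fuel LHV m_gas cp T_in T_out", positive=True)

# Stated as an equation, not as an assignment with fixed inputs and outputs.
energy_balance = sp.Eq(m_fuel * LHV, m_gas * cp * (T_out - T_in))

# Causality 1: outlet temperature prescribed, solve for the fuel flow.
print(sp.solve(energy_balance, m_fuel)[0])

# Causality 2: fuel flow prescribed, solve for the outlet temperature.
print(sp.solve(energy_balance, T_out)[0])
```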
Currently, Modelica is supported by various simulation environments, both commercial, like Dymola, the Modelica tool used in this video, and open source, like OpenModelica. These tools have the task of interpreting the Modelica language, of translating the mathematical model into the numerical one, then compiling this into efficient code and finally running it. The use of a symbolic manipulator to generate the numerical code is key for system modeling. Indeed, the overall equation system of a model built through an object-oriented modeling approach usually consists of several trivial algebraic equations, such as the equations used to formulate mathematically the interconnections between the components. When the numerical model is generated, the Modelica tool is able to identify a minimal set of equations, which is equivalent to the original model developed by the user, through a recursive elimination of the trivial equations. The result is a numerical model that can be more efficient than a hand-written procedural code. After the numerical code is compiled, the system model can be solved, and results can be plotted in the Modelica tool or imported in MATLAB or Python for a more in-depth post-processing." "Gas Turbine Simulation Program (GSP) Demo","https://www.youtube.com/watch?v=Dsz5Z_6DvLk","Hello, my name is Wilfried Visser from Delft University and I am going to give a demonstration of a simulation of a gas turbine using the Gas turbine Simulation Program version 11, or GSP 11. I double-click the icon on the desktop and the program starts, showing the main form. On the top right corner of the screen, the main form contains various tab sheets with component models from which we can build a model. We are actually going to use a pre-configured model, so we don't have to build a model in this demonstration. We are going to do a similar task as is prescribed in the course basics section: a quick simulation session. So we are going to open a GSP project with a pre-configured model. We click the open button, and we go to the sample projects subfolder in the GSP projects standard folder. These projects are pre-installed with the installation. We open the turbojet model, which is a simple model of a turbojet, actually the J85 turbojet that was designed by GE in the late fifties. Here you see the model panel with all the components in configuration: inlet, compressor, combustor, turbine and exhaust, and also a component to control the fuel flow. All these components are pre-configured with data for a design point, or the cycle reference point if you will, and also with map data to enable the model to simulate off-design performance. To do a simulation we need to define a case, so we click the case button here. We add a case underneath the reference model, and in this case we simply select design as the type of run case. If we now click the run button, GSP will do a simple design point calculation, or a cycle reference point calculation if you will. In the table, one record is added with data for the parameters that are selected with the output tab sheet checkboxes in every component; so you can actually define what you want in the output. We can also look at the results in a more convenient way for a single point. If we look at the operating point report here, we undock the form, and here we can see a text file with pressures, temperatures, mass flows, etc. at the various stations that we have demanded as
output, and also the global system performance: thrust, rotor speeds and specific fuel consumption. Next we are going to make it a little bit more interesting. The case and configuration management tree on the left enables the user to clearly manage his run cases and different configurations in a project, where configurations are actually different models of the various configurations and options in his analysis. So we are going to add another case, and we can add it underneath the case here, like that, and give it a name, for example OD 1, off-design one. We can also add one under the reference model and then call it OD 2, and we are going to use this one actually. This will be a single steady-state simulation. But of course, if we run the simulation now, we would simply have a design point and an off-design point, which is the third row here, exactly the same as the design point. That is because we did not specify any off-design condition, any condition that is different from the design condition. So we want to do something more interesting: we are interested in the off-design operating point at a fuel flow of 0.2 kilograms per second. So we change it to 0.2 kilograms per second; that is what we just did. We run again, and now you see the simulation produced an extra line in the table with different data. You actually see a lower mass flow here, and a lower rotor speed. We can also have a look at the operating point report here, and we see different data. Now of course these single points are interesting, but it is much more convenient if we had a whole range of points, with a gradually decreasing fuel flow for example. So we are going to add another case under the reference model configuration. We click the add case button, we name the case SS series 1, and we select the case type steady-state series. We want to empty the result table, because in this case we are not interested in results from other cases, so we click the close button here; we don't want to save, and the table is now empty again. Next we need to configure the off-design simulation input. We go to the manual fuel control, we go to the steady-state series tab, and we already see at point 1 the design point value of the fuel flow. So we add a point 2 and then another point, 10; at point 10 we want the fuel flow to be 0.1 kilogram per second. We also want to activate the steady-state series input by checking the active checkbox here, because otherwise the simulation will not actually listen to the series input from the manual fuel control. Next we click the run simulation button to start a simulation. GSP will first do a design point calculation, which will be the cycle reference point that is required for the subsequent off-design simulation. Then GSP asks me for confirmation to start at point zero in the off-design simulation input that we just specified. So I click OK, and then GSP will calculate off-design simulation points up to point 10. The convenient way to look at the results from a series simulation is of course to use a graph. But first we need to prepare the table a little bit, and that is by inserting a break between the design point row and the off-design simulation result rows, because then the graph knows that these rows belong to separate groups. We now look at the graph, and first we need to specify what we actually want to see. We want to see pressure ratio as a function of fuel flow, and thrust for example, and turbine inlet temperature,
and probably the mass flow at inlet station 2, like this. Now you see a small D at the right, which indicates the design point, and the line on the curve here is actually the off-design points connected. If I would delete the first few points, you would see the line become disconnected from the design point. So using a break in a table like this, and then showing the graph, is very convenient. There are options to automatically add breaks between series and design points to separate curves in the graph. We are going to give an example of this by changing the simulation a little bit, comparing performance at different operating conditions. We click the little ambient and flight conditions icon on the top left of the window, and we go to the off-design conditions. We are going to change the conditions to ISA plus, adding 30 degrees to the ambient standard temperature. Now, in general it is a good idea to reset the model back to the cycle reference point, to make it easy for GSP to start simulating again at max power, and we do that by clicking this button, resetting to the design point. So now if we start the simulation, GSP will do the cycle reference point again, which is exactly the same, ask me to start at point zero, and then will do the simulation. This avoids GSP getting in trouble finding the base load point or take-off power point from a very low point, that is, iterating towards take-off power from an operating point far away from the cycle reference point. Next, before we look at the graph again, we want to change a few things in the table, so we properly separate the different series. We go to the point where we started the second series, which started with a design point. We don't want that design point, because it is exactly the same as the design point at the beginning of the table, so we delete that record. But we do want a break between point zero and the last point 10 there, so we insert a break there. Now we have a look at the graph, and we see two curves: one curve is for the standard conditions performance simulation, and the red one is at plus 30 degrees temperature. And obviously you see the pressure ratio is a little bit lower, starting almost at the design point. Thrust is also lower. Of course, the turbine inlet temperature is a lot higher at the same fuel flow, and the mass flow is lower because of the lower density. We can also look at things as a function of other parameters, for example as a function of turbine inlet temperature, and we see other things, like this for example. So there are many ways you can look at the data using the XY graph function in GSP. Next we also want to look at the operating curves in the compressor. So we open the compressor data entry window, we go to the map tab sheet, and then we click the little icon to show the map graph. We see the compressor map; let's make it a little bit larger. Here we see the design point already. If we go to the map menu and we say draw the steady-state points, we see the operating curve in the map. We can zoom in a little bit. We see that, as expected, the operating line in the map for the different conditions is almost exactly the same, as it should be. The blue line is not connected to the design point because we deleted a few points there. You also see that the operating line is crossing the stall margin in this map,
and that is because in the model we did not implement the functions that actually avoid stall in the real engine, which are bleed valves, compressor bleed valves. Okay, so this concludes this demonstration of GSP 11 to simulate a turbojet engine. Thank you for your attention." "How Can Our Approach Benefit Your Company? - Testimonial","https://www.youtube.com/watch?v=M7o7yXl1Vrs","We are a company that is quite a few years old, very good at making cleaning agents. However, we have only been making those systems ourselves for two or three years now, so all these techniques, electronics, data and so on are very new for us. Like I said in the beginning, we have quite a big concern around sustainability, thinking of all the impact we have as a company on the future. And I think social responsibility is also something you need to think of. A lot of our products go to healthcare and the food industry. What came out was that the data that we generate can be connected to the results regarding hygiene. So for example, with our data we can see that someone has cleaned and disinfected somewhere, or didn't clean or disinfect, because those are, I think, the more interesting cases. And there came an ethical issue where we said: okay, if we see, for example, an increase in illness or even fatalities, what are we going to do as a company? We can demonstrate with the data that something went wrong. So that was quite an eye-opener for us. Thinking ahead, you always try to think in business solutions, which is very good; you are very creative. Well, that's what we always try to be. But I think an important aspect is that you also think more closely about what the other impacts will be of the things we are developing. And that's something we learned: the big benefit is that it gives you an extra dimension in your development process. So yeah, it helps you look at things you have never looked at before. It definitely changed some things in our products, in our concepts. But there are also, again, opportunities here. I know that in our case we made the data more open, and on one side this helps us with the ethical issues, but on the other side it also gives extra sales opportunities, an extra way for our customers to want to use our product." "TU_Delft_Virtual_Exchange_Program","https://www.youtube.com/watch?v=fsReoiDsR-o","As a student, you want to expand your horizons and increase your knowledge. Maybe it's variety you're interested in, or flexible study options, or you'd like to dig deeper into your area of study, but find that some interesting courses are not part of your campus curriculum. What do you do? Look no further. With the virtual exchange, you can expand your portfolio by taking high-level online courses from top partner universities around the world, and you don't even need to travel. The courses are varied and cover a wide range of topics. This way, you can take subjects related to your own field, but also follow courses in other areas that appeal to you. Interesting! Study and collaborate with motivated students of different nationalities and backgrounds, and gain different perspectives. It's online. It's flexible. Just make time in your schedule and fit it around your regular study programme. You decide when and where to study. And what's perhaps even more important is that you gain credits for it. Nice! So which course do you want to follow? Go to the Virtual Exchange website for more information."
"Virtual_Exchange_Student_testimonial_Palash_Patole","https://www.youtube.com/watch?v=wGpHeZBkP2g","I took the virtual exchange course for couple of reasons. The first is the content of the course itself. So, I am very much interested in astrophysics and after looking at the structure and contents of the course, I thought it would be nice to learn about it. Then also, I have never done a set of one move before. I started with few but never finished them. So, I thought this would be a nice chance to start again and not to give up this time. And also the time norm of the course was such that I was able to manage it with my academics here at Dell. So, I thought I can do the course. I think that I like most is it offers you the flexibility. But at the same time, there are some constraints that are imposed on you. So, you are supposed to finish you lessons every week. But then you have a freedom to do them any time you want as long as you made the deadline. So, I think this brings some discipline which is very important if you want to finish the course and it is in a virtual duration. And also there is an incentive that you get to learn what you want and also encourage for it. The great thing about the course is like the contents of the course were really interesting. And I got to learn from the experts in the field like one of them was noble or creative. But he explained things very easily making sure that the main ideas always convey. And towards the end of the course there were interviews of the researchers in this field who express their view like what will be the future possible developments in this field. So, that gives an idea if one is looking forward to follow up on this interesting subject. I think everyone should give it a try and go ahead and grab the opportunity not to be bounded by your classroom or your nationality or the place where you are studying and study from the experts across the world. Having interest is one of the important characteristics. Then you also need to plan it. You have to finish few lessons every week. So, depending on your schedule you need to dedicate some time. And then you need to follow up on the different practice questions and the questions that you are not able to solve go them over and over again. So, discipline is important. The course was very well structured. It was split into four modules and it was logical to split them in four modules. The instructors were really good. As I said one of them was Nobel or Ed but they still expand things in very simple way and convert the main message. And I also made use of the forum and asked few questions to them and they were really prompt into playing back, which helped me to get the good idea of the concepts. I would definitely recommend for actual extension to other students because as far as I know many of us are curious to know about the things. It is just that we do not have a platform where there is enough flexibility as well as good incentive. And virtual exchange is such a platform." "Virtual_Exchange_Student_testimonial_Omer_Khalid","https://www.youtube.com/watch?v=C_dUUc1PIrE","When I first came to know about the virtual exchange program, then I looked at the course catalog and I found some really interesting ones. And I wanted to take some multi-disciplinary courses apart from my core engineering course work. So at first I enrolled in one course on creative problems solving and then in the next semester I enrolled in two more courses on responsible innovation and management information systems. 
I found it a really interesting experience. I would definitely recommend it to other students, especially those who want to take courses that are not offered on campus, and also because the courses are from different universities, so you get to know different ways of teaching and different perspectives. That is very important, especially for an engineer in this day and age: not just to know his own specialized field, but also to study different disciplines and to learn, especially from the technology management domains, things that can really help him in the future as well. I could work on those courses at my own pace. There were no classes like on campus; I just had to watch videos and do assignments and some projects, and then there was a written exam at the end of the course. So it was at my own pace, and I could manage it in addition to my on-campus courses. The course on management information systems was very interesting, and the professor was very good; he also taught us some career skills, such as how you can use the material in the course for your future career. So the course content was very engaging; the courses were very engaging. The interaction with other students was through the online discussion forums, which have their own section on the edX website. You can contact the teaching assistant there as well, and you can also contact the professors through email. So the interaction was mostly in that online domain. And I could get help whenever I wanted: I sent emails to professors and they replied, so that was fine. The other thing I found was that the course was not just about theoretical knowledge; it also prepared me with future career skills. And that is very important for an engineer in today's world: you don't just have to know your specialized field, you should also know about other disciplines, and especially some career skills that can help you in future job applications and interviews as well. So the virtual exchange program also prepared me for that." "Virtual_Exchange_Student_testimonial_Nina_Dinaux","https://www.youtube.com/watch?v=6f81kvmqgpo","I took the virtual exchange course because I like to do some more courses next to my study, to take a broader view of my studies and choose my own subjects that I want to study. I can also choose my own speed of studying the course: if I have a week off, I can do a lot of it, and if I have a busy week in my own studies, I can just leave it for a while and take it up later on. The quality of the course was very good actually, and I liked that there was a lot of material that we could watch or read. The quality of the teaching materials was very high: the teachers made videos from their teaching material, and they would show them, and it was very good actually. It was nice to be in a virtual classroom, because you had a discussion board, for example, and you could have discussions on this board with students from other cultures and with other perspectives. So you could have a lot of discussions, or ask questions that you had, and you could contact other students while you were studying in the virtual class. I liked it because it's easy to access all the files and documents and the learning materials, and I liked the videos; you can just re-watch them if you want to. There was a discussion board, and on this discussion board you could always ask anything, and other students would comment, or you could have a discussion about a certain subject.
There were some peer reviews, so we had to assess other students' assignments. And there was one final exam where we had to write a thesis, and eventually we will get some feedback on that as well. I would certainly recommend it to others, because it's such an easy way to see a lot of other courses that you normally wouldn't see in your own study. It's easily accessible and flexible, and you will talk with a lot of other students from different cultures and with different perspectives." "Virtual_Exchange_Student_testimonial_Prabhav_Manchanda","https://www.youtube.com/watch?v=Savze6Fbmmg","I took a course in virtual exchange because it gave me an opportunity to study a subject which I have always wanted to study, and through that knowledge I was able to get a very nice internship, so it helps my future. The main motivation was that I really wanted to study the subject, because it wasn't part of my curriculum here. So this was the best opportunity, and the great thing is that it actually gives you credits: I will be getting seven credits for the course that I did. The thing that I liked the most is that the quality of teaching was excellent; one of the teachers was a Nobel Prize winner, and I also got the experience of an international university, as the course was given by the Australian National University. The best thing is that you can watch the videos from anywhere. It's very accessible, and if you have any doubts you can talk to the professors, you can email them. So it's very nice. The best thing about virtual exchange is that you don't actually go there, but you are there, because it's a virtual exchange. You get to talk to a lot of new people from other universities. In virtual exchange you get to know a lot of students from other universities; although you are not there, you can still get in touch with them through the platform. That really helps you to get a nice perspective from other universities and other branches, and you can also take up a subject which is not in your curriculum. The first thing is: definitely go for it, because it's a very good opportunity. And the second thing is that if you want something beyond the knowledge that you're getting here, it's a brilliant platform for you to go and try out." "AE4263 What Why How video","https://www.youtube.com/watch?v=NDhpAOFnFyM","Let's get to the point. Now we're going to see what we are going to model, or learn how to model, in this course, why we're going to do it, and how we're going to do it. First of all, the what question, all-important. In this course, we are going to treat the engineering models of systems and processes. Of course, you can think of an engine or a power plant as a system. Why are we going to do that? This will guide our modeling effort. We're going to do it because of a purpose: we have to define the purpose very precisely, and the success of our endeavor depends very much on how precisely we define it. And of course, that requires defining what we are going to model. In general, we humans make models of things, representations of reality, when we want to understand how something works. And then, how are we going to do it? We're going to do it with different sets of mathematical models, and of course we will go quite deep into that, thanks to the use of computers that allow us to run the models, obtain simulation results, and gain understanding from them.
Now, before we start, I need to tell you some definitions. The definitions are a little bit boring, but bear with me. I will read them out loud for you and comment on them so that they become clear. What is a system? A system is a group of independent but interrelated elements comprising a unified whole. It is physically defined by its boundary. A bit abstract; let's see what it is. Let's think of an aero engine, for instance. An aero engine is a system: it forms a unified whole, and it is formed by components, the compressor, the combustor and the turbine, which are all tightly interrelated. Of course, in order to define a system, we have to define where it starts and where it ends. For instance, we might be interested in modeling the aero engine, but we do not care about the wing or the aeroplane; we just develop a thermal model of that system. What is a process? A set of physical, chemical, sometimes biological transformations of material and energy, and a set of operational procedures, implemented in a controlled system. Let's think, for instance, of a power plant based on the steam Rankine cycle: a process can be steam that evaporates. That's a physical process; it is a transformation of material and energy. Of course, in energy engineering we do not let the processes go wherever they want; we have to control them tightly. Think of a combustor and the combustion that occurs within it: we definitely need to control it to avoid, for instance, an explosion. What is a model? A model is a simplified representation of a system or process, in time and/or space (we will see that in detail later), intended to promote the understanding of the real system. I think this is quite self-evident. We, for instance, develop the model of an aero engine to design it, to understand how it works, to improve its performance. What is a simulation? The use of a model in such a way that it enables the understanding of interactions that would otherwise stay hidden. Now, you can easily understand that the complexity of what happens in an engine, in a power plant, in whatever system, is so huge that by just staring at it, even with a very good physical understanding of what happens, we would never be able to tell what the relations among the different variables are. We do need mathematical equations for that. Now, I already mentioned some examples. Let's go through a list of things you will learn how to model in this course. Of course, engines, whether flying or on the ground: truck engines, turbofan engines, even rocket engines. And I would advise you to try to spare some minutes and think about other examples yourself, maybe those that motivate you to further learn about mathematical modeling. Examples of power systems you see listed there: maybe you have already heard of fuel cells, but maybe you didn't hear about very exciting technologies that we are studying here at TU Delft, such as supercritical carbon dioxide turbogenerators. You'll hear more about that later. In general, these can be classified as thermal energy conversion systems. In that sense, the things you learn in this course are not limited to power and propulsion: you can think of other applications whereby thermal energy is converted to obtain a certain defined purpose. Some examples, which I hope will get you excited. What you see here is a state-of-the-art turbofan engine. It is extremely complex, and possibly the most efficient machine that we have ever devised and built.
On the right-hand side, you see the graphical user interface of a software tool you are going to learn how to use in this course. It is called GSP, the Gas turbine Simulation Program, and some of the elements that compose the software, like for instance the process flow diagram that they call a scheme. Another example of a gas turbine, in this case a terrestrial one: this is also a wonder, because it achieves incredibly high efficiency thanks to that bulky element you see on top of the gas turbine, which is called the recuperator. And again, on the right-hand side you see the representation of such a system in GSP. As I mentioned, you can also end up modeling systems that do not provide propulsion or power, but are thermal conversion systems. This comes out of some research we are doing at the moment on the environmental control system of aircraft, which is also a very heavy energy consumer on board, and therefore there is a lot of interest in making it efficient. And this is the graphical user interface of another program that has been developed with the purpose of studying and improving this kind of system. When we come to terrestrial applications, something I am very fond of is the organic Rankine cycle turbogenerator. This is a turbine system for the generation of electricity that can use renewable energy as its primary thermal energy source. Therefore you understand it is extremely topical and compelling. A fuel cell is also a very modern energy conversion system, based on electrochemical reactions. It can be used both on board an aeroplane, top left of the chart, or as a power plant, bottom left of the chart. And you see here the graphical user interface of a Modelica model; you will hear a lot more about Modelica in the coming modules. Now, why do we do modeling? Let's define some types of engineering problems. We do it, of course, when we need to design a system. It is therefore the first thing we do when we want to analyze the feasibility, for instance, of a novel concept. But actually, the art of modeling has got to an extremely sophisticated level: nowadays, in some areas, one can claim to develop a virtual prototype, avoiding the need to actually build hardware and relying on a computer for very accurate predictions. We of course care a lot about pollution when it comes to propulsion and power systems. That has to do with sustainability, and we can develop highly accurate models to predict the emissions of engines. All-important is the control of systems. For instance, you can imagine how important control is in the case of emissions. For this reason we develop models, system models, that allow us to conceptually get to very efficient strategies for the control of systems and to manage operation: regular operations, start-up, shutdown, or even emergency situations. Troubleshooting: once the system is realized, very often, especially if it is a new system, there are problems, and models can help us understand, if there are faults, how to solve them, how to solve malfunctions. Other types of modeling: safety is of course extremely important; imagine, on an aeroplane. Models can help us prevent hazardous operation by understanding those complex interactions that happen when a hazardous situation arises. They can also help us, in the unfortunate case of an accident, to try to understand what caused the accident, or to estimate the effects of accidents. Models are of course very important in that.
Operator training: all these systems require highly trained operators in order to have these propulsion and power systems work properly. Imagine the aero engine: the pilot needs to understand how it works and of course needs to be trained in all the operations that are needed for proper functioning. For instance, models are needed to train operators to start up and shut down the engines properly, or also just to properly operate the system in normal conditions. But very importantly, they need to understand how to react in case of an emergency. And of course that is not something that can be reproduced in reality. Again, think of the example of a pilot on an aeroplane: you do not want to train the pilot in an actual emergency situation, but you would like him to have a simulator, and there train him to react to an emergency situation in a non-hazardous condition. And as I said before, it is also important to study the environmental impact: assessing the emissions through models is also very important. Now let's have a look at the why question. We apply models throughout a project. You see there the time span of a project, and each phase comes with its special type of model. At first we have to design the system; therefore we need a model for system analysis in order to get, probably, the optimal solution for the given problem. In a second phase of the project, whereby we need to design the control, we need a model in order to test different concepts or different strategies to control our system. Once we have realized the system, initially the operation will not be optimal, and again we can rely on a special type of model to fine-tune the operation of the system and get to the right parameters of the control system. Finally, once the system is deployed in the field, again we can use models to further optimize its operation. Now let's have a look, again within the why question, at the correspondence that there is between a set of people that we call the users and another set of people that we call the developers; and probably you can become one of the developers or one of the users by attending this course. The users are those who provide the requirements for models, and you can see we can go from the level of development, at the very bottom of these two triangular charts, where you have researchers developing very sophisticated models that can be used by manufacturers or other researchers to come to a new prototype, for instance. Then you have the testing of this prototype, which requires a different type of model; again, the maturity of the software gets to a higher level. And finally you have the actual deployed system. The users of the deployed systems are the engine operators or the airframe designers, and again you have an even more mature level of software, which is most often commercial software that is sold, maintained, and discussed with the users of this software. Now, with respect to the how question: we like to say that the modeling effort is usually an iterative one, and you see here an example of what we call the modeling loop, where this iteration occurs. First of all, in the top left of the chart, you have a real-world problem that helps you to define the requirements, from which you develop a mathematical model, which you implement in software; you run simulations, you obtain a solution, with very sophisticated visual tools that allow you to interpret and analyze the results, with which you should solve the problem.
Now, unfortunately, very often one iteration is not enough to solve the problem, because you have to go back to square zero, or to one of the other squares, and improve your model until you get to the solution of the problem. What are the possible approaches to modeling? I have devised three categories; well, this is my personal subdivision. In this course we will be focusing on physical-equations modeling: our models will be based for sure on conservation equations, of which you see an example written out there. But sometimes, as you remember from the very first slide of module one, systems are too complicated and we do not have physical equations for all of them. This is why sometimes people resort to experimental data, which are needed in order to obtain a reliable model. Sometimes we can still use very sophisticated modeling techniques, like linearized models and control theory, in order to obtain a set of equations, which we usually call transfer functions, that still allow us to predict the outcome of a certain input based on experimental data. This is the case, for instance, if the model is linear, and you might have seen some of that in previous courses. Finally, there are cases in which the situation is just too complex, and this was the case, for instance, many years ago with fuel cells: the electrochemistry was so complicated that the only way to understand how they worked was to actually build a scaled model and measure almost everything. Now, in all of this, you understand computers play a major role, and that role has to be understood very carefully. A computer is used for model development, for simulation and for analysis, in all these phases. It is always at the base of the what-if analysis we do in order to be able to get to the solution of the problem. We need specific software to model the system. It can be pre-programmed commercial software or something that we learn to develop in this course. And here a very important word of caution: be careful, because the powerful CPUs of today can induce mistakes through a faulty way of working, which is trial and error. You always want to avoid continuously trying things without understanding what is happening. That is a sure recipe to waste a humongous amount of time. With respect to this, I would like to stimulate you to read a PDF document we have posted on Brightspace, which will tell you better what I mean by that. Have we succeeded in creating artificial intelligence, something that is independent from human thinking? I do not think so." "AE4263 Course Introduction","https://www.youtube.com/watch?v=b-qeqqwBMGY","Welcome to this course. It is about modeling and simulation of energy conversion systems. We have put a lot of effort into it, so I truly hope you will like it. Here are the lecturers. One is of course myself; my name is Piero Colonna. I am the professor of propulsion and power at the aerospace engineering faculty. I have been dealing with energy conversion systems for a long part of my career, and in fact I taught a course on this subject already in the past. I will be lecturing the first two modules of this course. Then we have Dr. Francesco Casella. Thank you, Piero. I am Francesco Casella. I work at Politecnico di Milano. I am a control engineer. Throughout my career, I have been mostly focusing on dynamic modeling with an eye to control. I have been doing research on object-oriented modeling and on the application of this to control design. These are the topics I will teach you throughout this course.
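Picking up the modeling approaches contrasted just before this course introduction, here is a rough illustration; the symbols are assumptions for the sake of example, not the actual equations on the slide. A physical-equations model rests on lumped conservation laws such as

\[ \frac{dM}{dt} = \dot{m}_{in} - \dot{m}_{out}, \qquad \frac{dE}{dt} = \dot{m}_{in}\, h_{in} - \dot{m}_{out}\, h_{out} + \dot{Q} - \dot{W}, \]

while an experiment-based, linearized model condenses the measured response into a transfer function, for example the first-order lag

\[ G(s) = \frac{y(s)}{u(s)} = \frac{K}{\tau s + 1}, \]

where the static gain K and the time constant \(\tau\) are identified from test data rather than derived from physics.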
And then it is also my pleasure to introduce to you Wilfried Visser. Hello, my name is Wilfried Visser. I am a part-time lecturer at the university on gas turbine performance and simulation. Outside my work at the university, I have 30 years of experience in gas turbine design, performance and simulation. I am also the developer of the Gas turbine Simulation Program, GSP, which is the tool you will be using in module 10A. And last but absolutely not least, we have Dr. Carlo De Servi. Hello, my name is Carlo De Servi and I am a researcher at the Flemish Institute for Technological Research (VITO) in Belgium. My research work focuses mainly on turbogenerators and on simulation and design methods for energy conversion systems. Of particular relevance for this course is my expertise in the simulation and modeling of energy and propulsion systems. I would like to thank Francesco, Wilfried and Carlo. I would like to start with something that I hope will inspire you. You probably recognize the engineer in the picture there: Leonardo da Vinci. One day I found a quote of his that I find quite compelling, and I would like to share it with you, first in Italian, because it sounds almost like a poem: Quelli che s'innamorano di pratica senza scienza son come il nocchiere che entra in naviglio senza timone o bussola, che mai ha certezza dove si vada. And then my attempt at a translation for you: those who fall in love with practice without knowledge are like a helmsman who comes on board without helm and compass, and never gains certainty of where he is heading. Think about it; it has a lot to do with what you are going to hear in this course. What is the context of this course? It is modeling and simulation, and you have been exposed to these concepts for a long time now. I'd like to focus your attention on several aspects that are quite general. Modeling can be an activity that can be assimilated to obtaining information from something of which you know very little: that is what we call a black box. And I would say that economics is close to that, because it is based on things that cannot be naturally converted into mathematical equations; think, for instance, of political opinions. Physiology is a little better, in the sense that we do have some tools to model what happens in biological systems, but still the level of uncertainty is extremely high. Another system that is extremely complicated, though arguably a little less than physiology, is ecology, and the study of ecosystems has progressed a lot in recent times, also thanks to the powerful computers that we have at our disposal today. Chemical processes are something that gets a little closer to what we are going to tell you in this course, and still, when we think of chemistry, very often it is extremely complicated and we have to resort to a statistical approach. Much more deterministic is the modeling of power plants, for instance, and, talking about this course, of power and propulsion systems in general. In some instances we know enough to say that we are closer to what we would define a white box. You will get practical information about this course on Brightspace, so I will not spend much time on this now; please consult the website. What are we going to see in this first module? First of all, I would like to clarify what we are modeling and why, which is of course a very important question to be answered.
This has to do with the purpose, which, you will learn, really guides the modeling effort. A good engineer should always know very well why he is making this huge effort of developing system models. I will then describe to you the role of models within the context of this course: propulsion and power systems. I will tell you about several different modeling paradigms and of course about applications, things you can do with the models you have learned to develop, through some examples. Then I will also briefly describe the tools that we are going to use in this course. They are both conceptual tools and software tools. And from the very beginning, I will tell you about the nine-step method that we have devised, which will help you to consistently approach the problem of developing models. I will introduce you to the nomenclature and units we use in this course. And I hope you will like a first simple example in which you will see all the ingredients that I listed before." "Software Testing in Java - Introduction to JUnit","https://www.youtube.com/watch?v=GwvT4so_v8Q","Hi, so now we're going to get started on the automation part of software testing. In the previous videos we thought about testing using our human intelligence, and now it's time to make the machine run those tests for us. We're going to use JUnit, which is the standard way of writing unit tests in Java. So let's get started. We have the Roman numeral problem, and this is my implementation. You don't really need to understand it now, because we are now wearing the tester's hat. But if you quickly pay attention, you can see that my implementation is about getting the current number, then looking at the next number, and then deciding if I should add or subtract, because of the subtractive notation that we discussed. But now, as testers, we just want to make sure that this works. So what I'm going to do is create a different class, RomanNumeralTest, and I put this in the test folder, as usual, which you can see on the left side of my IntelliJ. So in the roman package, I have the class, and it is empty. Let's get started with JUnit. How does this work? It is very simple: we basically write methods, and these methods will do the test for us. The first one will be: I want to try a single character. So I create a method. It has to return void, and it has to be annotated with @Test. I'm going to import this from org.junit.jupiter.api, because I'm using the JUnit 5 version. This @Test annotation indicates to JUnit that this is a test. Then the first step is that I need to instantiate the class I want to test, in this case the RomanNumeral class. I'm going to store this in a variable, and I'm going to call it just roman. The second step is to invoke the method we want to test, so roman.convert, and I'm going to pass a single character now; that is the test I'm doing right now. So I, for example. And I know that the Roman numeral I needs to be equal to 1. I'm going to assert that using the JUnit functionality, which is Assertions.assertEquals. I pass what I expect, which is 1, and the variable that stores the result, which is named result. So take a look at this, because all your unit tests will look much the same. We basically think about one of the cases; in our case, we started with the single character case that we discussed in the previous video. I invoke the method I want to test, passing this data, and I get the result.
And then I assert, and that's the verb we use: we assert that the result is as we expect. So it needs to be one. IntelliJ already knows how to play with JUnit, so as you can see, there is even this green run button in here; run test, as it says. If I click on it and then I click on run singleNumber, IntelliJ will automatically run this test for me, and you see the green result at the bottom of my screen. This means that IntelliJ executed this method, compared the result with the number one, and they were the same. So for this behavior, my software works as expected. We see that our test is green, and this means we are happy. But one test is not enough; in the previous video we came up with many. So let's go to the second one. I'm going to minimize this and write the next one: @Test again, void. The second scenario I'm going to automate is when I have more than one digit, like VIII for example. So: numberWithManyDigits. And this is a nice piece of information: the name of the method doesn't matter, so as developers we use it to try to express what we want to test. numberWithManyDigits is my case. So I'm going to again create the RomanNumeral class, the class I want to test, and invoke the method, and now I'm going to pass, for example, VIII. Let me store this in a variable; I'm going to store this in result. And I know that VIII in Roman numerals is eight. So again: Assertions.assertEquals, eight, result. I have a button here; I'm going to run it again, or actually for the first time, since this is the first time I run this test. And wow, it is red right now. This means that JUnit executed my test, but the result was not eight. And this is a bad sign: it means that I have a bug in my software. As a developer, this is actually a good thing, because I found this bug before sending my software to production; my final user didn't see it. So what we need to do is go back to the source code and find the bug. So pause this video and try to find the bug. OK, I'm back, and the bug is actually here: it shouldn't be just greater than, but greater than or equals. Pause this video and understand why, but that is the bug, and it is a common one; we have been discussing that developers make lots of these kinds of bugs, and this is just another one. So I fixed the bug. What I usually do is run the test again to see if I really fixed it: the result is green. Another tip for you as a developer: I'm just running one test, as you can see in my left bar, but it's nice if you run all of them. Nice, the two tests are passing. Let's write just one more: @Test again, void. Now I'm going to play with a number with subtractive notation. If you remember, subtractive notation is the thing in Roman numerals where you need to put a smaller numeral before a bigger one; you're going to see it now. So I'm going to create the RomanNumeral again, I'm going to convert, and the number I'm going to try is IV. Let me store this. So IV, and this number is four. So let's make sure it is four, and we do this by means of assertions: the result needs to be four. And this is actually the order in which you pass the parameters; it seems unintuitive, but it is first what you expect and then what you calculated. So four, result, and not result, four. Let's see if our software works: running the tests, they all run, and also pay attention to how fast it is; IntelliJ says that it took 27 milliseconds to run. This is definitely faster than a human. OK, so the three tests are green.
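Gathered in one place, the three tests described in this walkthrough could look roughly like the sketch below. RomanNumeral, convert and Assertions.assertEquals are named in the transcript; the exact package layout and method names are assumptions for illustration.

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

class RomanNumeralTest {

    @Test
    void singleCharacter() {
        // Instantiate the class under test, invoke it, then assert on the result.
        RomanNumeral roman = new RomanNumeral();
        int result = roman.convert("I");
        Assertions.assertEquals(1, result); // expected value first, computed value second
    }

    @Test
    void numberWithManyDigits() {
        RomanNumeral roman = new RomanNumeral();
        int result = roman.convert("VIII");
        Assertions.assertEquals(8, result);
    }

    @Test
    void numberWithSubtractiveNotation() {
        // IV places the smaller numeral before the bigger one: 5 - 1 = 4.
        RomanNumeral roman = new RomanNumeral();
        int result = roman.convert("IV");
        Assertions.assertEquals(4, result);
    }
}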
And this means that so far our implementation seems to work. What I want you to do now is to continue writing many tests; in the previous video we thought about many of them, and your task is to continue. So this was a very simple introduction to JUnit. Throughout the course, you're going to see more features; JUnit is an amazing framework, so go look at the documentation. But this is what we will do from now on: we're going to discuss how to think about tests using our intelligence, the human part of doing tests, as we discussed, and as soon as we have a test in hand, we're going to automate it. Always, always, always using JUnit. Ready for it? See you in the next video." "Software Testing in Java - Mockito","https://www.youtube.com/watch?v=_0uqsgmy6CI","To apply mock objects, we're going to use Mockito. Mockito is the most popular Java framework for mock objects, and I really recommend you go to Mockito's website, as the documentation is very complete. Mockito is a very powerful framework, so if you want to master it, you will have to read the documentation. What we're going to do in our test is to get rid of the real database and mock it. So let's start by completely deleting the database access part from the test. Now that our test doesn't use the database class, we introduce Mockito, and the first thing we do is tell Mockito which class it needs to mock. In this case, it's the invoice database, the one we want to mock. The mock method from Mockito receives the class we want to mock, and you see the .class because we're getting the definition of the class from the JVM. Mockito returns the same type as the class, in this case the invoice data access object. This is very useful for us, because it means we don't have to change our code that much: the mocked class has the same interface as the real class. Of course, for Mockito to do this, there's lots of magic going on behind the scenes, but for us, consumers of the framework, we can just take advantage of it. The second part, as discussed in the previous video, is about setting the behavior we expect. For this mock, what we want is to return a list of invoices when the method all is invoked. This list of invoices can now be just an in-memory list, way simpler than a database. So that's what we do here: we create a list using the Arrays.asList method from Java, and we add the two invoices, i1 and i2. Then we use the Mockito.when method: when the all method happens on this mock object, then return the list that we just created in memory. With these two lines of code from Mockito, we now have a class that simulates the database object, and this mock object is able to return this in-memory list when the all method is invoked. That's nice. We're almost there, but now we need to make changes in the production code as well. I'm going to discuss this more in future videos, but changing the production code to ease testability is something that we will do quite often in practice. If we go to the production code, we need to get rid of the instantiation of the database access class. We cannot do this anymore, because if we do the new there, it means we are always going to use the real database, and that's not what we want. We want our code to use the mock when the program is being tested, and we want the program to use the real class, the real data access object, when the program executes for real.
A common way to do so is to receive the class we will mock as a parameter of the class, and we usually do it in the constructor. As you can see in this code, the InvoiceFilter class now has a constructor receiving the database access object. This is nice because, due to the polymorphism of the Java language, we can pass any class that is or inherits from the invoice data access object in this constructor. In other words, I can pass the real one, the one that I want to be executed in production, but I can also pass the mocked one, just for the tests. So our production class is now more flexible. This pattern has a name: it is called dependency injection, which we'll discuss more later. We are almost there. If we go back to our tests, we just need to pass the mock via the constructor, and in terms of implementation that's it. What happens is that as soon as we execute a test, the filter method invokes the all method on the data access object, which is now a mock, and this means that this all will return the list we created in the test with the two invoices. Now we have completely got rid of the database, and we are able to test our InvoiceFilter class without being bothered by the complexity of having a database in our tests. This is again a unit test, and because of the mock object simulating the database, we can now explore bad weather and corner cases and write several test cases for this class in a much easier way. This is what mock objects are about: they help us simulate the behavior of classes so that, as testers, we can really focus our energy on the class we want to test, and not on its dependencies." "Aeroacoustics: Noise Reduction Strategies for Mechanical Systems - Online Course Introduction","https://www.youtube.com/watch?v=VHdfu0KfS4w","Mechanical systems are everywhere, and although they do a lot for us, they can also produce a lot of noise. Here at Delft University of Technology, we are training the new generation of scientists and professionals to build sustainable and quiet machines. We are now ready to offer our knowledge to you, the working professional. Attaining a proper balance between noise output and aerodynamic performance is not an easy task. There are lots of regulations about noise pollution, which are enforced by agencies around the world. Specializing in aerodynamics and aeroacoustics is really challenging. That's why we offer these online courses, so that you can get ahead in these fields. In just a few weeks, you will learn how to translate complex aeroacoustic theories into practical design applications. Your creativity and experience will also help you develop innovative noise reduction strategies, improving your career prospects. We start by reviewing the physical principles behind sound generation and what parameters influence noise production. This knowledge is then translated into practice, using both exercises relevant to industry and online simulations. This will help you optimize the performance of a mechanical system. Throughout the course, you will receive personal feedback from our international experts, who will also discuss industrial regulations and equipment. After taking these courses, you will be able to analyze and reduce the noise sources of your system, to make your product as efficient and quiet as it can be. Sounds interesting? Find out more on our website."
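Returning to the Mockito walkthrough above: gathered into one test, the steps described there might look roughly like this sketch. InvoiceDao, InvoiceFilter and the all method are implied by the transcript; the Invoice constructor and the filter signature are assumptions for illustration.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.util.Arrays;
import java.util.List;
import org.junit.jupiter.api.Test;

class InvoiceFilterTest {

    @Test
    void filterInvoicesUsingAMockedDatabase() {
        // Tell Mockito which class to mock; the mock has the same interface as the real DAO.
        InvoiceDao dao = mock(InvoiceDao.class);

        // Set the expected behavior: all() returns a simple in-memory list of two invoices.
        Invoice i1 = new Invoice("Alice", 50.0);   // hypothetical constructor
        Invoice i2 = new Invoice("Bob", 300.0);
        when(dao.all()).thenReturn(Arrays.asList(i1, i2));

        // Constructor (dependency) injection: pass the mock instead of the real database.
        InvoiceFilter filter = new InvoiceFilter(dao);
        List<Invoice> result = filter.filter();    // exercises the class against the mock

        // Assertions about the filtered invoices would follow here.
    }
}

The design point worth noting is the constructor injection: because InvoiceFilter receives its data access object from the outside, the same production class runs unchanged against the real database in production and against the mock in tests.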
"Hyperloop: Changing the Future of Transportation - Course Introduction","https://www.youtube.com/watch?v=fOBtxgUw6eg","Imagine the world connected like a metromap. The high-pollup may have the potential to realize this. In this MOOC, the Delft High-Paloup's student team will teach you all about the basic concepts of this revolutionary mode of transportation. How does levitation work? How would you prepare the high-pollup passenger partner? And how would it break? What would a high-pollup journey be like? Through lectures, exercises, discussions and challenges, you, the learner, will be equipped to answer these questions and to come up with new concepts. Whatever your background, this MOOC will give you the tools to understand this new mode of transportation. The new MOOC will be the highest-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-class-" "TU Delft Leadership Essentials for Engineers - Program Introduction","https://www.youtube.com/watch?v=-yunRA8x06U","As an engineer, you have the great advantage of having an analytical mindset. You are comfortable with figures, you understand technology. But suppose you are leading a large engineering project with many stakeholders involved with conflicting interests. Maybe you are already in charge of large engineering teams and you are expected to lead the company's product innovation. Are your engineering skills enough? How will you establish your leadership? Promote your vision. Delph University of Technology's new program, Leadership Essentials for Engineers, was designed for engineers who are in or are preparing to take on management and leadership positions. The approach of this program is unique and is based on developing three mindsets. The first mindset is the analytical mindset. You are already a very analytical professional, but in a leadership position you will face new responsibilities and tasks. In the first course, you will learn to solve complex problems using analysis-based decision-making. The second mindset is about influencing people. As a leader, you need to know how to convince internal and external stakeholders who may not share your view. In the second course, you will learn how to stay holders play the game, what strategies they use, and what strategies you can use to align these stakeholders. The third mindset is about communication. As a leader, you need to be able to communicate your ideas and strategies in order to gain support from your people and support from your external stakeholders. In the third course, you will learn how to reduce the complexity of your world to a concise, powerful, and maybe even inspiring message. During our program, leadership essentials for engineers and collaborate with like-minded professionals to develop and enhance your leadership skills and become a successful and effective leader." 
"Design your Next Career Move - Course Introduction","https://www.youtube.com/watch?v=gCqPpwdkvuY","If you're busy doing, how do you know if what you're doing? Truly matters to you. Create some space to create a better tomorrow. Define your challenge and look at things from a different perspective. We want you to feel engaged, energised and empowered. There are infinite possibilities and one to follow. So we invite you to investigate, ...and iterate. Don't wait. Design your next career move." "FRP Composites in Structural Engineering - Online Course Introduction","https://www.youtube.com/watch?v=_84SFqZHvjA","Making a bridge out of reinforced plastic? Think of it. It's light. It's durable, cost saving. And you can learn all about this online. Fiber reinforced polymers or FRPs are making a breakthrough in architecture and structural engineering. They're increasingly being used to save time and money and enable innovative designs. This hybrid steel trust FRP composite deck bridge was built over one of the busiest highways in the Netherlands. It's three times lighter than its 1200 ton concrete alternative. Because of the lightness of the material, the entire deck could be pre-assembled. Then the whole bridge was put in place with the roadway close to traffic for only two nights. And there was no need for concrete hardening or welding on site. Disruption to traffic and the costs of foundations equipment and transportation were automatically reduced thanks to FRPs. Would you like to be able to take advantage of these benefits? Designing and building with FRPs requires you way of thinking. New skills and knowledge are prerequisite for successfully using the layer and orthotropic material that is FRP. Two-delft is one of the world's leading institutes where new computational methods and innovative technology for FRPs are developed in close collaboration with industry. In our online course, specially aimed at professionals interested in designing and building with FRPs, you will learn a lot about specific structural and short-term and long-term material behavior, design guidelines and manufacturing techniques for the infrastructure and buildings. You will have 24-7 access to course content, which includes video lectures from industry experts and array of online material and a calculation tool that can help you formulate project details. If you'd like to apply what you learned straight away to your own projects, you can bring in your own case and receive personalized feedback. Would you like to benefit from one of the most exciting materials in building an infrastructure development? Discover more at tudelf.nl-frpcourse." "Opening OE Global 2018","https://www.youtube.com/watch?v=TsiYZLy_QXk","Hello, my name is Willem Favaukmer. I am the conference chair of OE Global 2018. We are busy with all the preparations for the conference in April. We have a couple of exciting things for you in store. First of all, the professional program is online. As you can see, it is packed with many parallel sessions. Second, we have some very interesting keynote speakers on the program, and the conference will be opened by our Dutch Ministry of Education. Third, we have great social events ranging from an opening reception in a Royal Delph pottery museum, a conference in a beautiful museum, Prince of Houth and a visit to Madurodom. And after the conference, you can stick around and experience a Dutch king's day. 
So go visit the conference website and don't forget to register before March 1 to take advantage of the early bird rate. We look forward to seeing you in Delft, the Netherlands." "Multi-stakeholder Strategies: Analysis for Winning Coalitions - Course Introduction","https://www.youtube.com/watch?v=N9eN0i602sQ","Great results are often the fruit of successful cooperation. In today's world, partnerships are critical. Successful cooperation may look smooth and easy from the outside, but it is often based on a longer process of trial and error, a conscious strategy, and most importantly, careful preparation. In successful cooperation, the partners know the other actors in their environment. They realize which actors are critical, and how they can work towards mutual success. In our online course, Multi-stakeholder Strategies: Analysis for Winning Coalitions, you will learn how to analyze your own situation as a game between strategic players in a network. It's a hands-on course, where you learn how to apply actor and strategy models using dedicated software. This helps you to plan your next move, to anticipate responses from others, to know what outcomes to expect, and how to build arguments that might persuade others to join your quest. Together, we will model your multi-stakeholder environment in order to formulate effective strategies for building winning coalitions. Enroll now and join us online at Delft University of Technology." "Invitation OE Global 2018","https://www.youtube.com/watch?v=037uMUBEck8","Hello, my name is Anka Mulder and I'm the Vice President of Delft University of Technology. My university has been active in open education for many years, and for that reason I'm really proud that TU Delft will be hosting the Open Education Global Conference in 2018. So please join us and share your ideas about transforming education through open approaches. So what is your best idea on open education, or perhaps what are the challenges you are facing and want to solve? The conference will take place in the beautiful and historic city of Delft. But you may also want to visit other cities in the Netherlands, such as Amsterdam or Rotterdam or The Hague. And these are all very close by. So be inspired, share your ideas, and I look forward to seeing you on April 24th, 2018, in Delft." "Open Data Course: Visualization and Analysis Tools","https://www.youtube.com/watch?v=3e73XVaHEuA","Data visualizations are often used as a means to communicate complex information to the public in ways that cannot easily be done with words. So what's involved in the creation of these? Today we'll look at the different types of steps and operations that are often used. Hi, I'm Chris Davis and I'm an assistant professor at the University of Groningen. In this video we look at the steps that are used for data visualization and analysis. More specifically, after this lecture, you should be able to describe what these are and have an understanding of how you could apply these steps in your own work with data. When you do data visualization and analysis, at a very basic level, you have to get the data, be able to process it, and create the visualization itself. There are often a lot more steps than this, and a good overview of what often has to be done is provided by Ben Fry in his PhD thesis on computational information design. In this he talks about the steps of acquire, parse, filter, mine, represent, refine and interact. The link to the original PhD thesis can be found on the platform. 
We'll go into more detail about these steps in the next few slides. While you can think of these steps as being part of a linear process, where once you're done with one step you can move on to the next, you can also conceive of situations where each of these steps can loop back and change the output of the previous steps. For example, while people are interacting with visualizations, they may filter out a different subset of the data, or select options that alter how the visualization represents the data. People may even want to load in new data sets, given something that they discovered using these later steps. This whole process of creating visualizations is a mix of science and art. Fry argues that this whole chain of steps draws on skills from different backgrounds. As you progress through the different steps, you're using knowledge from fields such as computer science, mathematics, statistics, data mining, graphic design, human-computer interaction and information visualization. To start, you of course need some way of acquiring the data. When acquiring data, it is generally good to automate the process as much as possible, especially if you expect the data to be updated in the future. While you can just go to a website yourself and download the data, you can also set up a program to download the data directly from a URL, which allows you to easily repeat the process in the future with new data. For the next step, parse, there needs to be some way to read the data so that the computer can understand it. There are actually several components to this: first, the format of the data; second, what the data means; and third, what the data really should mean, but currently doesn't due to errors. Regarding the format of the data, there are different file formats that the data may be written in. A key thing to remember is that in whatever programming language you're using, there should be some library or piece of software already written to help parse the data from common file formats. The next thing to be aware of is what the data actually means. Specifically, there are numerous data types that represent things such as text, dates, numbers, and geographic coordinates. One thing that you will likely experience is that parsing dates will sometimes cause problems. For example, a date that looks like 8-12-16 may mean 8 December 2016 or 12 August 2016, depending on whether you expect the day or the month to appear first in the date. A date such as August 2016 may also be ambiguous if the computer expects a day of the month to be specified as well. Even if the data is correctly interpreted, there may still be issues with incorrect values in the data, such as inconsistent units of measurement being used. For example, you may find data that is listed in millions of euros, but also in euros. It may be easy to spot the values in euros, since they are so much larger than the rest of the reported values. One way to deal with this would be to set up your program so that it spots these issues and automatically fixes the data. If there are a lot of diverse types of issues in the data, you may want to use a free open source tool, such as OpenRefine. The developers of OpenRefine describe it as a powerful tool for working with messy data: cleaning it, transforming it from one format into another, and extending it with web services and external data. In practice, this is a very useful tool for dealing with data that needs a lot of cleanup in order to be useful for later visualizations. 
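The date ambiguity just described is easy to reproduce. Here is a minimal Java sketch (my own illustration, not code from the lecture) in which the same string parses to two different dates depending on the pattern the program expects:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    public class DateAmbiguity {
        public static void main(String[] args) {
            String raw = "8-12-16";

            // Day first, as is common in Europe: 8 December 2016.
            LocalDate dayFirst =
                LocalDate.parse(raw, DateTimeFormatter.ofPattern("d-M-yy"));

            // Month first, as is common in the US: August 12, 2016.
            LocalDate monthFirst =
                LocalDate.parse(raw, DateTimeFormatter.ofPattern("M-d-yy"));

            System.out.println("Day first:   " + dayFirst);   // prints 2016-12-08
            System.out.println("Month first: " + monthFirst); // prints 2016-08-12
        }
    }

The same raw characters yield two valid but different dates, which is why a parsing step should always make the expected format explicit rather than rely on a library's default guess.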
On the platform, you will find a link to the OpenRefine project page and a tutorial that demonstrates the different operations that you can do with it. The next step is filtering, and this is relevant if you need to extract some subset of the data instead of using all of it. The key question to think about for this is what attributes of the data need to be filtered on. For example, are you looking at categories or classes, numeric values, ranges of dates, geographical areas, or a combination of multiple features? For the mine step, filtering may not be enough, and we may have to use various statistical techniques in order to find patterns of interest. We may be interested in highlighting outliers, showing what average values are, or identifying what appear to be clusters in the data. What is shown here is part of a reference sheet for the dplyr library for the R programming language. We discuss this in more detail in a later lecture. As you can see, this provides many functions by which you can summarize data using statistical functions. This also allows you to join data sets together based on matching variables, among many other features. For the represent stage, it's a question of how you would like to visualize the data using different techniques. For this we show a reference sheet for the ggplot2 library for R. This highlights several techniques that can be used for plotting one variable or two variables. Just from this you can see different examples, such as histograms, bar charts, scatter plots, stacked area charts, density charts, and so forth. For the refine step, this is a question of how to change the visualization to highlight things of interest, and this step can also relate to the output of the previous steps. For example, when selecting a subset of the data, you may want to make the rest of the data more transparent in order to de-emphasize it. You may also want to create objects that become larger when you place your mouse over them. The final step of interact applies if you do dynamic visualizations instead of static ones. There are many free open source options for software that can help to create these. If you use the programming language JavaScript, you can use libraries like d3.js. For R programmers, there's the Shiny package, which allows you to write your data processing code in R and have it interact with a dynamic webpage. The interaction with the data should allow the user to explore some complex issue in a way that helps them to understand it better. A good example of this is a visualization that gives people options for balancing the budget of the US national government. The links to this can be found on the platform. This is a very complex issue, where the amount of media coverage a budget item receives doesn't necessarily correspond to its overall size. This visualization helps people get more insight into the complex trade-offs that have to be made and helps them to understand some of the difficulties that are involved. All of these steps that have been mentioned are part of the storytelling process, which is to a large extent what data visualization and analysis are a part of. There are some very famous examples of this. A cholera epidemic struck London around the 1850s. At that time, people thought that cholera was caused by bad air. A physician named John Snow decided to actually plot the homes of the cholera victims and the locations of the water pumps. In doing so, he was able to locate the water pump that the disease was spreading from. 
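The lecture illustrates the filter and mine steps with R's dplyr package; as a rough language-neutral sketch of the same two steps (filter a subset, then summarize it statistically), here is a small Java example with made-up data, so all names and numbers are illustrative:

    import java.util.DoubleSummaryStatistics;
    import java.util.List;

    public class FilterAndMine {
        record Measurement(String category, int year, double value) {}

        public static void main(String[] args) {
            List<Measurement> data = List.of(
                new Measurement("energy", 2014, 10.5),
                new Measurement("energy", 2015, 12.0),
                new Measurement("water",  2015,  3.2),
                new Measurement("energy", 2016, 11.1));

            // Filter: keep one category and a range of years.
            // Mine: summarize what remains (count, mean, min, max).
            DoubleSummaryStatistics stats = data.stream()
                .filter(m -> m.category().equals("energy"))
                .filter(m -> m.year() >= 2015 && m.year() <= 2016)
                .mapToDouble(Measurement::value)
                .summaryStatistics();

            System.out.printf("n=%d mean=%.2f min=%.1f max=%.1f%n",
                stats.getCount(), stats.getAverage(),
                stats.getMin(), stats.getMax());
        }
    }

In dplyr the same pipeline would be a filter() followed by summarise(); the point is only that filtering narrows the rows, while mining computes statistics over what is left.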
Another famous example was done by Florence Nightingale, who created a visualization showing that during the Crimean War, surprisingly, most soldiers were not dying from wounds sustained on the battlefield, but rather from preventable diseases. Through this work, she was able to argue that improving sanitary conditions could go a long way towards saving lives. As we have seen, there are many diverse steps involved in performing data analysis and creating data visualizations. Furthermore, each of these steps also requires different skill types. Ultimately, this is about storytelling and about using techniques that allow us to communicate complex data to the public. In a later lecture that I will give, you'll be able to see actual examples of how this works, and you can directly see this through the link in the footnotes on this slide." "Open Data Course: Open Data for Public Policy Making","https://www.youtube.com/watch?v=lNEJBGFJb4E","Hello, welcome back. Open government data can also be used in policymaking processes. But how exactly can officials use this data to formulate and improve governmental policies? And how can citizens and other stakeholders contribute to this process? In this video, we will be exploring these issues. After this video, you should be able to describe the current steps of open government data publication and use, describe how this relates to improving public policies, and describe the main roles that are involved. So let's start with the current steps of open government data publication and use. From a high level, the current open data process can be divided into four basic steps. First, the data are created. Government organizations and publicly funded research organizations produce, collect and integrate large amounts of data each day. They collect this data to be able to fulfill their ordinary tasks. For instance, the Ministry of Justice collects data about the number of crime victims in order to create its crime prevention policies. The production of this data is funded with public money. Second, public agencies and publicly funded research organizations decide whether they will open their data on the internet. We often refer to the term data publication when data is opened. Data can be published on the website of a government organization, on a national portal, on other portals, or on different combinations of relevant portals. Governmental data is increasingly published on the internet, and it is then referred to as open data. Third, potential data users can find this data by searching open data portals. This can be done manually; however, nowadays this is also often done automatically by machines. For instance, application programming interfaces, or APIs, can be used for this purpose. Since open government data is provided through a large variety of portals, finding the data that someone is looking for can be challenging, especially if he or she does not know whether the data exists and which government organization creates or collects the data. Users may be looking for the needle in the haystack. Fourth, when open government data is found, it can subsequently be used. Often, the data user needs to download the data to be able to work with it. Open government data can be used in many different ways, for instance by cleansing, analyzing, visualizing, enriching, combining and linking the data. Data cleansing refers to detecting and correcting records in a data set. 
Data cleansing could be a goal in itself, but is often performed to make it easier to use the data set in another way, for instance by analyzing it. Analyzing a data set could merely mean reading it, that is, looking at the data and deriving useful information from this activity. But it could also refer to conducting a thorough statistical analysis by using software such as SPSS Statistics. An analysis of a data set could lead to new insights and understanding of the data, possibly by analyzing the data in a way that was not done before. Visualizations also often provide much insight into a data set. And data sets can also be enriched in several ways. For instance, a user could annotate the data set by describing his or her experience of using the data, or by noting which information other users should take into account when using the data. A data set can also be enriched by adding information that was derived from statistical analysis or visualization. Another important way of using open data is by combining data with other data sets, or by linking them to other data, as this reveals relationships and correlations between data sets. And data interpretation is very important for each of these steps of open data use. For instance, in order to analyze or combine open data sets, the user needs to be able to interpret the data and to understand the context in which it has been created. In sum, we just explored the first four steps of the process of using open data for public policy making, namely creating data, publishing data, finding data and using data. After open government data have been used, in practice the process often stops. Too often, the focus is on publishing, but not on learning from usage, which in turn can result in improvements to the publishing process. If we want to use open government data to formulate and improve public policies, there need to be four more steps. As a fifth step, a feedback loop must be created. Open data users can provide feedback on data sets, such as feedback regarding missing values or data quality problems. Moreover, data users can provide feedback based on the outcomes of the data use. For instance, a researcher can use open government data to answer how taxation affects welfare. This type of data use might help answer research questions and might provide new insights and conclusions. And these insights may be interesting to other open data users, but also to policy makers. Policy makers can analyze this feedback obtained through open data use, discuss it with data users, and subsequently learn from these new insights obtained through open data use. The analysis and discussion of policy feedback might subsequently be used to formulate or improve public policies. And this contributes to creating an open government, since the government interacts with citizens and other open data users in its policymaking processes. This means that if governments want to use feedback that arises from open data use for improving policymaking processes, they need to publish this data in a way that makes it findable and reusable. Moreover, this requires feedback mechanisms that governments can use to find out how their data has been used and what can be learned from this. This may seem simple. However, providing data in a findable and usable format, and providing feedback mechanisms in addition to this, can be complicated. 
For instance, governmental organizations may have collected and published data in a format that's not preferred by open data users, which can be a barrier to using the data. Another barrier may be the lack of feedback mechanisms, or the complexity of analyzing this feedback by policymakers. The foregoing shows that if governments want to use feedback derived from open data use for improving policymaking processes, actors with a variety of roles are involved. So which main roles are involved? First, we saw that data providers are involved, since they supply the governmental data to the public. For instance, these can be international, federal, regional or municipal government agencies, such as the European Commission, the federal government in the United States and the municipality of Rio de Janeiro. Moreover, open data users are an important role category. There are different types of users, including entrepreneurs, developers and citizens, but also researchers, journalists, archivists and librarians. Civil servants themselves can also be users of governmental data, for instance when using data provided by other governmental agencies. Furthermore, policy makers are involved, since they can make use of the knowledge obtained through open data use and use this to formulate and improve governmental policies. So these are the three key roles. The actors performing these roles are dependent on each other's activities. For instance, open data users depend on governmental data providers for obtaining the data that they're interested in. Open data providers depend on open data users to obtain feedback regarding data publication that can be used for future data supply. And policy makers depend on the users to obtain information that can be used in the development of policies. Managing these interdependencies requires coordination of the activities of open data providers, users and policy makers. The actors need to collaborate to make policy making with open data possible. In sum, it can be concluded that open data might be used for policy making, yet using open data to improve public policy making is not as straightforward as it may seem and is accompanied by barriers. So here are the references related to this presentation. Thank you for your attention." "Introduction to Wind Turbines: Anatomy of Wind Turbines","https://www.youtube.com/watch?v=XDHgQFgh8Yk","Welcome to this learning unit, in which you get an overview of the outside and inside of the most common types of wind turbines. On the whole, these wind turbines look very much the same. However, you'll see that there are also some important variations. First, I'll give you some terminology for the components that can be seen from the outside. As you all know, these moving parts are the blades. The blades connect to the hub, and together they form the rotor. The rotor connects to the nacelle, which houses the machinery. The combination of rotor and nacelle is aptly called the rotor-nacelle assembly, which is often abbreviated to RNA. The rotor-nacelle assembly is supported by the tower, and this rests on the foundation. The tower and foundation together are called the support structure. Sometimes you can recognize the housing of the transformer at the tower base. For an offshore wind turbine, this can be on the platform above the boat landing. Some other turbines have the transformer at the rear, below the nacelle. If we open up the nacelle, we can see the drive train. 
The drive train is the assembly of all rotating components that are involved in the energy conversion. What you see here is traditionally the most common type of drive train, the drive train with a gearbox. On the left-hand side, you see the hub that is connected to a low-speed shaft. The generator that is used in this drive train is a more or less off-the-shelf product and needs to rotate at a much higher speed than the rotor. Therefore, it is connected to the low-speed shaft through a gearbox. For multi-megawatt turbines, the gearbox increases the rotational speed by a factor of about 100. The nacelle connects to the tower through the yaw system, which enables the turbine to align itself with the wind. You will recognize the outside of this type of turbine, as many turbines of this type have been built on land. It has the shape of a camper van, but be aware: it is usually much bigger. The drive train is an elongated assembly of several medium-sized components, and therefore the nacelle is relatively long, but not so high and wide. The next drive train has been a minority for a long time, but is growing in popularity. The hub connects directly to the generator and there is no gearbox. This configuration is therefore called a direct drive. The direct drive is particularly gaining popularity in the offshore market. It is expected to have a higher reliability due to its lack of a gearbox, and that should lead to less downtime and lower maintenance costs. The generator needs to be much bigger because of its low rotational speed, and it is therefore much more integrated into the structure. As a consequence, this type of turbine doesn't have an identifiable low-speed shaft, but instead a large bearing to carry the rotor. As mentioned on the previous slide, the direct drive generator is very large. This can be seen on the outside, because it invariably leads to a nacelle with rounded forms. The large diameter of the generator is clearly visible, while the nacelle is shorter for the lack of a low-speed shaft and gearbox. The third configuration for the drive train is a hybrid of the previous two. It does have a gearbox, but a much smaller one: the rotational speed of the rotor is only increased by a factor of about 10, so 10 times less than in a traditional drive train. Therefore, the generator speed is higher than in the direct drive, but lower than in a fully geared system. This leads to an intermediate size for the generator. At first glance, the system seems to inherit the disadvantages of both previous concepts. It still has a gearbox that can fail, and the generator is not an off-the-shelf product. However, this type of drive train can be easily scaled to larger powers without an excessive increase in the drive train mass. For the two previous configurations, the mass of either the gearbox or the generator would increase very much with such scaling. The hybrid drive train has a more equal distribution of its volume over width, height and length, and is therefore very compact. Although it does have potential for large offshore wind turbines, there aren't many around at the moment, so you won't commonly spot them in the field. The last drive train configuration that I show here is the exception to the rule that all components are aligned sequentially. Here you see four generators that are connected in parallel to the four outgoing shafts of the gearbox. 
Each generator has a quarter of the power rating of the turbine; in case one of the generators fails, the turbine can continue operation with a slightly reduced performance. I show this drive train to you to make you aware that new ideas keep popping up, and not all ideas are listed here, especially ideas addressing the particular demands of offshore turbines. Every once in a while, such an idea gets taken a step further than just the drawing board, and we may therefore see more changes in the future. In this video, you've seen several configurations of drive trains. As you have seen, these drive trains have many components in common, albeit in a different configuration. When we treat the individual components in the next learning unit, you'll be able to visualize how they fit in these configurations. I hope you enjoyed this topic. Thank you very much for your attention." "Introduction to Wind Turbines: The Origin of Wind","https://www.youtube.com/watch?v=3m6SmqeCeuo","As you all know, the sun is the ultimate origin of wind. In this video, you'll see how heating of the Earth's surface leads to wind, and which patterns the wind will consequently follow. Wind is created through temperature differences on the Earth's surface. Here you see a map of the mean surface temperature in January. Obviously, the temperatures are higher around the equator than around the poles, and this is driving the global patterns of wind. If you look at the Earth's atmosphere from the side, you can see how the temperature variation over the Earth's surface leads to circulation cells. Around the equator, warm air rises, and higher up in the atmosphere this air has to make place for new rising air, and therefore it moves either north or south. Eventually, it will hit colder air coming from the direction of the poles. At this point, the two air flows sink back to the Earth's surface. A similar but opposing circulation can be seen at the poles. The cold air at the north pole sinks and has to move to the south. There, it encounters air coming from the south, and these two flows have to rise. Because of the thickness of the atmosphere, there is room for three of those cells between the equator and each pole. At points where the air rises, a low pressure region forms in the lower atmosphere. At points where the air sinks, the lower atmosphere becomes a high pressure region. Because of the size of the circulation cells, the north-west of Europe typically experiences low pressure regions. This typical latitude of high and low pressure regions is visible on this map of a particular day. The high pressure region is around the Mediterranean, and the low pressure region is around Scandinavia. The white lines represent isobars, or lines of equal pressure. The wind moves along those lines. At first instance, you would actually expect the winds to go directly from high pressure regions to low pressure regions. We will see later how the Earth's rotation causes the winds to deviate from this direct route and follow the isobars. Because the low pressure regions are in the north and the high pressure regions are in the south, we will see that this causes patterns with mainly westerly winds in north-west Europe. Let's have a look at this rotating disc and a ball that is moving along a straight line in a fixed frame of reference. If you would look at the motion of this ball from a frame of reference that is fixed to the disc, it would appear to be curved to the right. You can see this in the lower part of the image. 
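The deflection seen in this rotating-disc demonstration is the Coriolis effect. In formula form (standard physics, added here for reference; the lecture itself stays qualitative), the apparent force on a parcel of air of mass \(m\) moving with velocity \(\vec{v}\) in a frame rotating with angular velocity \(\vec{\Omega}\) is

\[
\vec{F}_{\mathrm{Coriolis}} = -2m\,\vec{\Omega} \times \vec{v},
\]

which on the northern hemisphere deflects horizontally moving air to the right, just as the ball on the disc appears to curve to the right.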
The same happens if you would have wind moving in a straight line from the north pole to the equator. Looking at the Earth from the top, so at the north pole, the Earth's rotation is counterclockwise. While the wind is moving from the pole to the equator, the earth underneath it therefore moves to the left underneath it, and the wind, seen from a frame of reference fixed to the earth, would then appear to curve to the right. The fictitious force that causes the curvature is called the Coriolis force. If we were to look at the Coriolis force in more detail, we would find the law of Buys Ballot. This law states that on the northern hemisphere, wind turns clockwise around high pressure regions and counterclockwise around low pressure regions. This explains our pattern of westerly winds in north-west Europe. The upper two circles show the patterns of wind around high and low pressure regions on the northern hemisphere. Imagine the low pressure region on the right to be north of the high pressure region. The wind will then move from left to right between the two pressure regions, so in westerly direction. So far, we have looked at the global patterns in temperature variation over the Earth's surface leading to large patterns in the wind. Now we're going to look at some local temperature differences leading to local patterns in the wind. If we have a coastal region with land on one side and sea on the other side, we also get temperature differences. During the day, the land heats up faster than the ocean, which has quite a constant temperature during the season. This means that the air rises over land, leading to a low pressure region, and it drops over sea, leading to a high pressure region. This causes a sea breeze, coming from the sea to the land. At night, the earth's surface cools down while the sea keeps the same temperature, and we get a reversal in the pattern. You can recognize this pattern in the Netherlands. The sea breeze increases the predominantly westerly wind during the day, and at night the wind drops due to the reverse flow. A similar effect can be found in mountainous regions. In principle, the higher you go, the colder it becomes. However, during the day, the surface of the mountain peaks will heat up faster than the surface of the valleys. This causes air to rise against the mountain slopes and to drop in the valleys. At night, the mountain slopes will cool down quicker than the valleys, and also here the pattern reverses. With this last example of local creation of wind from solar energy, we close off this overview of the origin of wind." "Introduction to Wind Turbines: System Behavior","https://www.youtube.com/watch?v=1CexTf9LMq4","To address the overall behavior of the drive train, we'll go back to the overview that was provided at the beginning. We have seen how aerodynamic power is converted by the rotor and transmitted as mechanical power through the low-speed shaft, the gearbox and the high-speed shaft to the generator, where it is converted into electrical power. For the analysis of the behavior of the drive train, we will focus on the low-speed shaft and on the high-speed shaft, for which the power is determined by the rotational speed and torque. Their inputs come from the rotor and generator respectively, and they connect through the gearbox. Therefore, we'll also look at the efficiency of the gearbox and how the gearbox changes speed and torque. As you have seen before, the mechanical power can be expressed as rotational speed times torque. 
When we relate the power in the high-speed shaft to the power in the low-speed shaft, we have to consider the efficiency of the gearbox. In the next step, we substitute the expressions for power to get the relation between torque and speed in both shafts. Furthermore, the rotational speeds are related through the transmission ratio of the gearbox. The efficiency of the gearbox has no effect on this expression, since it is a purely geometrical relation. Substituting this expression for the rotational speed in the energy balance leads to a relation between the torque in the high-speed shaft and in the low-speed shaft. This expression shows that the efficiency directly affects the torque in the high-speed shaft. This should not come as a surprise. The losses in the gearbox are caused by friction, which leads to a reduction in torque on the outgoing shaft. Here you see a recap of the torque-speed characteristics of the different components. Neglecting losses in the main bearings, the aerodynamic CQ-lambda curve can be directly translated to the speed-torque curves in the low-speed shaft. Similarly, the speed-torque characteristics of the generator can be directly translated to the speed-torque characteristic in the high-speed shaft. However, the speed and torque levels in the two shafts differ several orders of magnitude, due to the separation by the gearbox. Therefore, we cannot directly judge from them how the system is going to behave. The next slide will show how the connection of speed and torque through the gearbox properties can help with this. The torque-speed characteristics in the low-speed shaft and high-speed shaft are repeated here. For the next step, it is good to realize what these characteristics actually mean. Let's first look at the low-speed shaft. These characteristics were based on the aerodynamic properties of the rotor, so they represent the behavior in the low-speed shaft when looking into the direction of the rotor. As a thought experiment, disconnect the low-speed shaft from the gearbox and instead connect it to a testing machine. This testing machine can be set at any rotational speed. If the wind is blowing at 10 meters per second and the testing machine would gradually increase the rotational speed of the low-speed shaft, the torque measured by the testing machine would follow the blue curve from left to right. Now we'll look at the high-speed shaft. In our thought experiment here, we disconnect the high-speed shaft from the gearbox and connect it to the testing machine. Let the testing machine apply a torque on the high-speed shaft and measure the rotational speed. It will be clear that the machine will measure the speed as it is set by the electrical frequency. Finally, we get to the crucial step. In the high-speed shaft, we have looked into the direction of the generator. But what are the torque-speed characteristics if we look from the high-speed shaft into the direction of the rotor? In other words, what if we disconnect the high-speed shaft from the generator and connect it to the testing machine there? We can derive these characteristics by using the gearbox properties. These tell us what happens to the torque and speed from the low-speed shaft when they are transferred to the high-speed shaft. Using these relations, we can translate the torque-speed curve of the low-speed shaft to its equivalent in the high-speed shaft, as shown here. Of course, they look similar in shape, but the scales on both the X and Y axes have been changed. 
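The relations walked through above can be summarized compactly. The notation below is a reconstruction for reference (the symbols are chosen here and are not taken from the lecture's slides): with mechanical power \(P = \omega\tau\) on each shaft, gearbox efficiency \(\eta\) and transmission ratio \(r\),

\[
\omega_{\mathrm{hss}} = r\,\omega_{\mathrm{lss}},\qquad
P_{\mathrm{hss}} = \eta\,P_{\mathrm{lss}}
\;\Rightarrow\;
\tau_{\mathrm{hss}} = \frac{\eta}{r}\,\tau_{\mathrm{lss}}.
\]

The speed relation is purely geometrical and carries no efficiency term, while the torque on the high-speed shaft is reduced both by the ratio \(r\) and by the friction losses captured in \(\eta\). The operating point discussed next is then the speed at which the rotor torque, translated to the high-speed shaft, equals the generator torque: \(\tau_{\mathrm{rotor,hss}}(\omega) = \tau_{\mathrm{gen}}(\omega)\).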
Now that we know the characteristics in the high-speed shaft, both looking in the direction of the rotor and in the direction of the generator, we can determine from the combined graphs at which rotational speed and torque the system is going to settle. Remember that the blue curve for 10 meters per second wind speed was obtained by replacing the generator by a testing machine. Now that the generator is back, the generator takes the role of the testing machine. It sets the generator curve somewhere, depending on the electrical frequency. The resulting torque in the high-speed shaft is where this curve crosses the blue curve. This is the point where the generator torque and rotor torque reach equilibrium in the high-speed shaft. If this generator were connected to the 50 Hz of the grid, without a back-to-back converter, the operational point would always fall on this vertical line. For different wind speeds, it would intersect at different heights with the relevant rotor curve. However, with the back-to-back converter, the generator can also be controlled differently. It is also possible to set the torque in the generator and adjust the electrical frequency according to the demand. In this case, the operational point is found at the intersection with the horizontal line, and the demanded electrical frequency follows from the resulting rotational speed. This analysis shows that it is not the wind or the rotor aerodynamics that determine the speed of the rotor. They do play a crucial role through the CQ-lambda curve, but it is the control of the generator that is decisive." "TU Delft 10 years OpenCourseWare anniversary","https://www.youtube.com/watch?v=wYSbY58RjPE","Well, 10 years ago we started OpenCourseWare at TU Delft, and that was the moment that we decided to share our education with the world. OpenCourseWare has been a catalyst for many other types of open as well, for example open science and open data. So I think that is important. But perhaps even more important is that with OpenCourseWare we started sharing our courses, our education, worldwide, which increases access to higher education. It has helped us to increase the quality of education. If you only look at the numbers: 10 years ago we started with seven courses, and now we have more than 200 courses in OpenCourseWare, more than a thousand lectures online, and 1.6 million learners who have accessed our materials. Those are great results. To give you just one example, our courses are published under an open license, which means that teachers worldwide can use and reuse our materials. And we noticed that there was this Vietnamese non-profit organization, which is called Kianhawk. Because they knew that a lot of the students in their country did not read or understand English, they translated our course materials into Vietnamese. I think being active in OpenCourseWare and in the open educational movement has also helped us a lot with regard to our international position. We are known as one of the world's leading institutions in this area. We joined edX as the first European member. And we also noticed that there are a lot of international PhD candidates and international students who got to know TU Delft through open educational materials. In 2018, we will host the Open Education Conference here on the campus at TU Delft. But that's the immediate future. There are other issues which are important as well. For example, open education, open science and open data are an essential part of our new strategic plan. 
And that means that we will be active in this field in the future as well. What we have already started is the Virtual Exchange Program, which is a program that we set up with universities worldwide. And through this Virtual Exchange Program, we enable our students, the students of the participating universities, to take MOOCs or other online courses for credit in their own regular programs. And that's already a really good step as well. So I'm very proud of the results so far. But like I mentioned, this has only been the very first start, and I'm really curious to find out what the future holds for us." "Globally Distributed Software Engineering","https://www.youtube.com/watch?v=gyrkIqK6A0s","Hi, my name is Rini van Solingen and I'm a professor here at Delft University of Technology in global software engineering. We've been working on a MOOC, a massive open online course, on globally distributed software engineering. It deals with everything: practices, problems, solutions. We have people from industry, we have software engineers, and we even have some international researchers and gurus who talk about global software engineering. Yeah, I'm Jeff Sutherland and I've worked with Rini writing a book called The Power of Scrum. Great book, you ought to read it. And you ought to join up and sign up for globally distributed software engineering. So, interested in globally distributed software engineering? Want to join the MOOC? Please enroll. And see you soon." "Forensic Engineering: Learning from Failures - Tree House of Failures","https://www.youtube.com/watch?v=yoQuSR_pNVQ","What if you have to conduct a forensic engineering investigation and someone asks you that cringey question: are you sure you have considered all possible causes? Now, did you? And how can we ever be sure to have considered all possible explanations of something that went wrong with a complex technical system? And can we always be sure we did not miss the cause we were looking for? Actually, perhaps we can't. But if we use a well-defined diagram of all kinds of causes that could be related to the failure of technical systems, we could at least use this as a kind of checklist to see which ones are likely to have occurred. But what should we use as a diagram? Well, we're going to use a tree house for that, and I hope this one was developed and produced sturdy enough to allow me to get down safely again. Consider the proper functioning of any technical system to be a tree house. The tree house is supported by a foundation, consisting of carriers, which are beams that span horizontally to carry the floor. Each carrier is supported by a set of tree stems that all grow on a bunch of roots hidden underground. The tree house will collapse if the foundation caves in, that is, when anything in the foundation system breaks. As a simple drawing, it would look like this: three carriers resting on the stems, which are firmly held upright by their roots. Now, from this drawing, we go to the tree house of failures diagram. This diagram describes the foundation of the proper functioning of a technical system, such as a tree house." "Railway Engineering: An Integral Approach – Course Introduction","https://www.youtube.com/watch?v=qXW4eXT4ydA","Look at our universe. An amazing assortment of incredible objects, isn't it? An enormous system in which everything is connected. You could call it a miracle. Take this guy. 
He overslept by 35 minutes, ran 531 steps while drinking 68 milliliters of coffee, with only 47 seconds left to catch his train. It's almost a miracle that he made it on time. Little does he know that the train approaching him is anything but a miracle. It's on time, too. Thanks to the commitment of people who made the switch from thinking of miracles to making them happen. Creating a sophisticated, safe rail system that's been operating since the 1830s, following the same principle: tracks, wheels, motion. And that story continues today, connecting cities, countries, continents and people all over the world. Imagine the importance of a proper wheel-rail interface, the value of solid catenaries and pantographs, the wear and tear of the material, and the significance of proper maintenance. It's engineering that keeps things going, helping us to stay safe. With bright minds connecting these dots, connecting us. So we can find our ways to work, our friends, our homes, or completely new destinations. Without noticing, it's all part of one of the largest, most innovative systems in the world, where everything is connected, like in our universe. And the best part is, it isn't a miracle, it's engineering. You never realized that, did you? TU Delft offers you the chance to get acquainted with the complex challenges of railway engineering and operations during an exciting MOOC. Join us and get connected too." "Nanofiltration and Reverse Osmosis in Water Treatment - Course Introduction","https://www.youtube.com/watch?v=0a3UgIAPzBo","Everyone working in or associated with the water treatment sector needs to keep well up to date. Our online course, Nanofiltration and Reverse Osmosis in Water Treatment, has been designed for working professionals who want to increase their knowledge of these technologies and their applications. Membrane filtration using reverse osmosis has universal applications, and an increasing number of water-related organizations are using it to ensure a supply of clean water for industrial use and drinking water. Learning about how these technologies work will enable you to better operate your own installations and make better decisions about investment and maintenance. In this seven-week course, you will engage in practical applications and gain hands-on experience of water treatment with reverse osmosis. Because of their sieve mechanism, the membranes retain bigger molecules, while smaller molecules will pass the membrane. Secondly, electrostatic interactions influence the retention. Once you have mastered the essential calculations required for reverse osmosis, you will have the opportunity to design your own installation. You will use our online virtual 3D lab to observe how a small reverse osmosis unit operates. You will gather data, perform calculations and draw conclusions about concentration polarization. After this online experiment, you will master this difficult subject. Through several virtual excursions, you will also experience the real-life operation of an industrial water treatment plant, with sea water and brackish water installations. Available 24 hours a day, our online course will give you access to course materials, discussion forums, online collaboration and peer feedback anytime you want, at the time and place that suits you. If you are interested in the fascinating world of membrane technology, and you want to advance your career in this field, enroll today." 
"Closure Works Course: Schelphoek case","https://www.youtube.com/watch?v=eLLJgwoxwBM","In 1953, we were confronted in a very huit stond. The strong winds pushed the water inside the southern part of the North Sea, and raised the water level here in the Oster Schellen. The water level went up. In the Netherlands, heavy stond is always come from the North West. And this location is means it came from that direction. That means there were no waves at this area. Normally there were no big waves, you stond, you're at the dikes here. So therefore the dikes were designed in a situation that they could cope with high water levels, but not with high waves. In 1953, the water level was much higher than the design water level. So therefore the dikes was overtapped and a breach occurred. You see behind me still the area where the dikes was breached. Because it became very deep over there, it was not possible to close the dikes and that location, and a new circular dike was built. At this moment, at the location, where this dike was closed. At that place, a new case, and was put into the water. Schellenvoek is located along the Oster Schellen Stree in the soundstwest of the Netherlands. In 1953, this location was open to the sea. The main wind direction for stormwinds is from the north west, so that is usually not all of the wave action on this dike. This explains why there is usually no much wave action in front of the dike. This picture shows the situation at this moment. The location of the breach is at location A. You can still see the remains of the old dike. For the closure, a new semi-circular dike has been made. The final closure was done as a case in. This is visible at location C. The artificial island B has not been to do with this closure. It has recently been constructed to increase the ecological value of the area. In the semi-circular dike, there was one remaining gap. For this closure, the use of a case in which was left over from the allied landings in Normandy. It was a box case in which all is intended to be used as a harbour case in the break-wall case. So it has no loose function. Therefore, it has to be placed in one operation and directly close the whole thing. That worked in this case because they had a lot of preparation, a lot of that protection. Behind me, you see the remains of this case in. You see only the top because it's some 10 meters high. The rest is inside the dike at this moment. In the time, when this was still open, so there was a couple of months. There was quite a lot of current in this area. This current is scoured out a number of channels. And the remains of this channel, you can still see at the owner area over there. Where there is still a water area which is in fact the remainder of this tidal gully, which occurs during the few months when it was open. In the right picture, you see the final closing case in and that it in a new dike. The case in use for this closing was a left over case in from the Allied landings at Aromansch Normandy in 1944. These case were constructed in UK and towed over to France to create a temporary harbor. In 1953, we could buy a few left over case for closing the breaches in our dikes. Because these case were originally intended as break-out requirements, they are closed boxes and no schloos caseants. On this very important difference, we will come back later in this course. A comparison of the present situation with the pre- 1953 situation. Both pictures are on the same scale. 
On the 1950 map, you see a small gap in the main dike. Behind that gap was the tiny harbor of Schelphoek, surrounded by relatively low dikes. These dikes were overtopped in 1953 and failed. This slide shows the situation one week after breaching. You can clearly see the location of the breach. It has a width of some 200 meters and is not very deep. Behind the breach, some erosion already took place. Realize that the part north of the broken dike was completely inundated. The height of this land is approximately 1.2 meters below mean sea level. Normal low water is here approximately 1.5 meters below mean sea level. This means that during most of the time, all the land was covered with water, with a depth of between nearly zero and three meters, depending on the tide. And this enormous mass of water was flowing in and out through the breach every tide. Because of this strong flow, already after three weeks the size of the gap had increased significantly. But also the depth had increased a lot. On the 8th of April, the depth in the breach was already 20 meters, and the width of the gap had increased to 300 meters. In the next three weeks, the maximum depth remained more or less the same, while the deep part increased in size. Also quite some land was eroded, and tidal creeks were formed. This breach was only one of the 70 large breaches which occurred during the storm, and it was also one of the largest. Therefore it took some time before detailed plans could be made for the closure. By May, the creeks had enlarged again, and it was clear that closing the gap along the original trajectory was completely impossible: it was already too deep, and flow velocities were too high. Therefore it was decided to make a new dike approximately one kilometer more inland. The first task is then always to prevent the selected trajectory for the closure from eroding further. So one has to start with placing a bed protection along this line; in May, a small part was already protected, indicated with the word 'bezinking'. This bed protection consists of fascine mattresses with a thin layer of heavy stones. One month later, the western bed protection was already completed, and a small part of the new closure dam on the west side was also ready. In the middle, a small island had been constructed to act as a working base for the final closure. In August, the bed protection was completed, and the middle dam section was also ready. At the end of August, the western part of the closure dam on the shallow part was completed, as was the bed protection. In this section, the dam was made using prefabricated small concrete caissons. In this picture, the flow over the sill can be seen. A part of the concrete caisson dam has been constructed. The floating crane is placing the concrete boxes on top of each other. This picture gives a good view of the flow pattern in the area: because the flow area over the sill is long, the velocities on the sill are relatively low, and turbulence is also limited. This prevents extra scour. In this picture, the sea is on the left and the inundated land is on the right. This picture gives an overview of the whole area. Note that from the new dike on, one sees only water in this picture. But most of the water is shallow, so one cannot easily sail with large vessels in this area. The final gap was closed with large caissons. These were the Phoenix caissons from the Allied landings. Such a caisson is a closed box, and it therefore has to be placed in one operation, during slack water at neap tide. 
After completion of the closure of the gap by placing the caissons, a new dike was built at the location of the closing gap. The caissons were largely filled with sand after placing. Because of that, the caissons could be completely buried in the soil, and at this moment only the top of the caissons can be seen. The caissons were originally accessible through holes in the top. For safety reasons, these holes are now completely covered with concrete plates." "Offshore Wind Farm Technology - Course Introduction","https://www.youtube.com/watch?v=Z0Gf7daUOQU","Since the first offshore wind farm was commissioned in 1991 in Denmark, scientists and engineers have adapted and improved the technology of wind energy for offshore conditions. This is a rapidly evolving field, with the installation of increasingly larger wind turbines in deeper waters. At sea, the challenges are indeed numerous: combined wind and wave loads, reduced accessibility and uncertain soil conditions. My name is Axelle Viré, I'm an assistant professor in wind energy at TU Delft, specializing in offshore wind energy. This course will touch upon the critical aspects of offshore wind energy and how to integrate the various engineering disciplines involved. Each week we will focus on a particular discipline and use it to design and operate a wind farm. For example, we look at how to characterize the wind and wave conditions at a given location, how to best place the wind turbines in a farm, and also how to bring the electricity back to shore. We look at the main design drivers for offshore wind turbines and their components. We'll see how these aspects influence one another and the best choices to reduce the cost of energy. This course is organized by the TU Delft Wind Energy Institute, an interfaculty research organization focusing specifically on wind energy. You will therefore benefit from the expertise of lecturers in three different faculties of the university: aerospace engineering, civil engineering and electrical engineering. Hi, my name is Ricardo Pereira. I'm a researcher and lecturer at the Wind Energy Department, and I will be your moderator throughout this course. That means I will answer any questions you may have, I'll strengthen the interactions between the participants, and I'll also get you in touch with the lecturers when needed. The course is mainly developed for professionals in the field of offshore wind energy. We want to broaden their knowledge of the relevant technical disciplines and their integration. Professionals with a scientific background who are new to the field of offshore wind energy will benefit from a high-level insight into the engineering aspects of wind energy. Overall, the course will help you make the right choices during the development and operation of offshore wind farms: design wind turbines that better withstand wind, wave and current loads, identify grid integration strategies for offshore wind turbines, and gain understanding of the operation and maintenance of offshore wind turbines and farms. We also hope that you will benefit from the course and from interaction with other learners who share your interest in wind energy. And therefore we look forward to meeting you online." "Types of Investigations - Air Safety Investigation Online Course","https://www.youtube.com/watch?v=ToG65LQetAY","An aircraft accident is first and foremost a tragedy. From the moment such a tragedy occurs, the clock starts ticking as people start looking for answers. 
This can lead to speculation, frustration, and potentially a lot of misinformation and misunderstanding regarding the events that surround the tragedy. The past has shown us that the most effective way to get to the answers is through an investigation. However, there is a complication. If we look at all the different people that are directly or indirectly affected by an accident, it appears on the surface that they all have the same motive: they all want to get to the bottom of what happened. Beneath the surface, though, each of these parties has different underlying motives for wanting to find out what happened. Furthermore, some of these underlying motives conflict with each other and may influence how the parties behave in an investigation. For this reason, rather than one single investigation, two different investigations are conducted. A judicial investigation looks at what happened from the motive of apportioning blame or liability. It seeks to look back and see who is at fault. An air safety investigation looks at what happened from the motive of identifying safety hazards and preventing recurrence. It seeks to look forward and improve future air safety. There is a separation between these two investigations. An internationally agreed-upon set of rules defines the terms of this separation, known as Annex 13 to the Convention on International Civil Aviation, Aircraft Accident and Incident Investigation, or simply Annex 13 for short. Both investigations get access to the same factual data, such as access to the wreckage and witnesses, but each investigation needs to conduct its own analysis. In this way, it is easier for parties to participate in the analysis and interpretation of data for the safety investigation without the risk of incriminating themselves." "Advanced Leadership for Engineers - Leading Organizations","https://www.youtube.com/watch?v=lPYKk0_age8","As an engineer, you are probably used to being the expert. However, as a leader, you will often lack the knowledge to make every decision in your organization. Indeed, the people you lead may have more knowledge on certain topics than you. This is called information asymmetry. So, how should you cope with this? To make our organization resilient to cyber attacks, I've tried to use a bottom-up approach. But now that regulations are changing and risks are increasing, I might need to use a more direct way of leading my team. In this situation, how can I continue to make use of all the knowledge within my organization? Developing strategies to deal with information asymmetry in organizations is key to developing yourself as a leader. Information asymmetry is one of the concepts that I cover in the professional education course Advanced Leadership for Engineers: Leading Teams, Organizations and Networks. Join me and fellow engineers pursuing and holding leadership positions. Enroll now." "Advanced Leadership for Engineers - Leading Networks","https://www.youtube.com/watch?v=TXQx2HWYjbQ","As an engineer, you have the advantage of being equipped with an analytical mindset. You are comfortable with figures and you understand technology. But how would you, as an engineer in a leadership position, manage a large engineering project with many actors with competing interests, for instance when developing an offshore wind park?
My position demands that I act within tight time constraints, but I have the feeling that when I force the decision to build this wind park, I end up with an ugly compromise, or the park won't be built at all. How do I build trust among our partners without losing credibility on the project? In a highly dynamic and uncertain environment, you can benefit from practical skills that complement your engineering background. In our professional education course Advanced Leadership for Engineers, we teach you these skills using challenging case studies and interactive assignments. Join fellow engineers pursuing and holding leadership positions. Subscribe now for the course Advanced Leadership for Engineers: Leading Teams, Organizations and Networks. Thank you for watching." "Online Courses at TU Delft","https://www.youtube.com/watch?v=84YQyi1Atx8","The online courses from TU Delft allow working professionals from anywhere in the world to advance their career in a flexible way. Our courses are 100% online, give you 24/7 access to course material, and allow you to interact with your lecturer and fellow learners easily. Courses are typically divided into learning units. Each unit represents a learning activity through which you master a certain topic. You will gain access to a variety of resources: video lectures, articles, quizzes, and assignments based on real-world problems to help you apply what you learned in your workplace. The discussion forums reinforce your learning process. They include discussions of the course content, allow you to ask your own questions, and let you learn from the experience of your fellow learners. Need help staying on track and planning your time? You will be regularly informed about deadlines, new course materials, and all the other information you need. The progress overview shows your progress as you complete the assignments and shows your grades. Occasionally, you will also have the opportunity to engage in live sessions with the lecturer and other learners to discuss the topics covered. Our platform offers you full control and flexibility to decide what, when, and how you learn, in order to make the most of your studies. Take a look at some of what's on offer for you." "Rotor and Wake Aerodynamics - Course Introduction","https://www.youtube.com/watch?v=g5DIchTrtds","My name is Karol Shimofrener. I'm an associate professor at the Faculty of Aerospace Engineering of TU Delft, and I teach and research wind turbine aerodynamics. This course will explore the aerodynamics of rotors and wakes. The focus of this course is on modeling the aerodynamics of various rotor designs, in helicopters and wind turbines, and examining how they behave in different operational modes. During the course, students will develop and implement their own models. They will learn the weaknesses and strengths of these models and how to use them to design a rotor. We'll cover many topics, including rotary wing aerodynamics with applications to aircraft propulsion, fans and wind turbines. We'll discuss conservation laws, actuator disc momentum theory and the limitations of these models. We'll see how helicopters fly, both in vertical flight and forward flight. We'll learn how to use vortex line methods to model a full rotor. We'll study the vortex wake behind wind turbines and the vortices behind a propeller. We'll learn about unsteady aerodynamics and non-stationary effects. We'll see and understand wind farm aerodynamics. We'll explore aeroacoustics and see how noise is generated by and propagates from a rotor.
I think this course will be very useful for engineers who have a background in fluid mechanics or aerodynamics and would like to understand how to design a rotor, be it a propeller, a wind turbine or a helicopter rotor. So, if you're up for the challenge, join us. Thank you." "Spacecraft Technology - Course Introduction","https://www.youtube.com/watch?v=_kM7pSYY79A","Modern satellites are much like your latest smartphone, but unlike your smartphone, the environment is harsh, and there is no person available to recharge it, move it, or rotate it to shoot a nice picture. If you ever wondered what kind of technology makes a capable and reliable satellite, this is where Spacecraft Technology will open the black box for you. In this course, you will be introduced to the latest industry-level knowledge about spacecraft systems, from the brain and the various parts up to power generation and propulsion. The course is organized in six learning units, each with an ideal duration of one week, and a CubeSat workshop, a group assignment running in parallel to the other learning units. This course suits professionals who recently entered the space industry, don't have a specific background in space engineering, and would like to improve their technical and scientific knowledge of the field. In the first half of the course, we will dive into the technologies of bus subsystems. We introduce you to the function of each subsystem first, and then dive into the technologies being used. We will focus on three subsystems: command and data handling, electrical power, and attitude determination and control. To get a true understanding of important design aspects and different configurations, we will explore some of the lower-level technologies at component level. To complete the picture of a typical spacecraft bus, we will also briefly deal with structures, thermal control, navigation, and radio communication. However, none of these functions can be completely exploited without a proper propulsion system. In the second half of the course, we will identify the propulsion options available to move a spacecraft when it is in orbit, or to actually send it to its final orbit. You will first take a closer look at liquid and solid propellant engines. Then you will learn about the basics of electric propulsion and the different types of electric propulsion systems. Finally, we will go small, and discuss the challenges and opportunities of miniaturized propulsion, its fundamental differences with conventional propulsion, and the main micropropulsion concepts available today. In the CubeSat workshop, your task will be to design a CubeSat concept with your fellow group members, starting from given mission objectives and requirements. Under our guidance, you will produce technical budgets, a system architecture, and, by means of subsystem and component trade-offs, a feasible conceptual design of the CubeSat. With this workshop, you will learn how to relate the mission to the spacecraft technology, and how the technologies of different subsystems interact with each other. Are you ready to dive with us into the exciting world of spacecraft technology? We are waiting for you." "Helicopter Performance, Stability and Control - Course Introduction","https://www.youtube.com/watch?v=2r8WSnF1ee8","My name is Mariana Pavel and I work at the Faculty of Aerospace Engineering. I research, teach, and deal with rotorcraft flight mechanics, design, and control.
Designing a helicopter is a trade-off between stability and maneuverability, between slow and fast. This you will learn in the course Helicopter Performance, Stability and Control at TU Delft. We will learn the aerodynamics of the helicopter. We will learn about helicopter performance and what helicopters can achieve. We will study the dynamics of the helicopter, and here we will concentrate mostly on the flapping motion. How do you control the helicopter? But mostly, you will learn to build a simulation model and to fly it with a pilot model. If you can fly this helicopter simulation, you can build a simulation model for any dynamic system. This course is interesting for engineers involved in rotorcraft design, performance, and the modeling and control of rotorcraft. I hope you will enjoy the course." "Advanced Dynamics - Course Introduction","https://www.youtube.com/watch?v=raNLsizqguM","Dynamics is a branch of mechanics that deals with the physical phenomena of a body or bodies in motion, and how forces can be related to motion. Advanced dynamics is about modeling complex dynamical systems and assessing how their equations of motion can be derived. My name is Mariana Pavel and I work at the Faculty of Aerospace Engineering at Delft University. I use advanced dynamics in my work to represent the equations of motion of complex systems. One can derive the dynamics of a rigid body either using Newton's laws or Lagrangian mechanics. In my course I will use Lagrangian mechanics, and I will teach you how to consistently derive the equations of motion of dynamic systems. This course will cover topics such as momentum and angular momentum, kinetic energy, potential energy, gyroscopic motion, and Coriolis forces, the forces that act on bodies which are in motion relative to rotating systems. The course will be useful for aerospace engineers, mechanical and civil engineers, biomechanical engineers, or students for whom understanding the theory of dynamics is fundamental in their work." "TU Delft Online Learning","https://www.youtube.com/watch?v=ja7zkkwXq0Y","For the last 170 years, Delft University of Technology's record of research, innovation and teaching has established it as one of the world's foremost universities, especially in the fields of science, design and engineering. TU Delft's focus is on making a difference in the world today. Our world-class academic staff deploy our outstanding facilities to invent and develop new technologies. Now, with TU Delft Online Learning, this is open to all. Gain access to the highest-caliber faculty, engage with innovative learning materials, and enjoy being part of a stimulating and supportive online community of learning. Hundreds of thousands of students from all around the world have already profited from our online courses, with benefits for themselves, their organizations and their communities. Whether you're looking to change career, enhance your knowledge and skills base, or seek further academic qualifications, the right course is here. TU Delft Online Learning means active learning. You will use state-of-the-art learning techniques with simulations, interactive exercises, online experiments, and collaborative group projects. Advanced theory and research combine with excellent partnerships in the industrial and business communities to ensure that what you learn can be applied in the real world. Course content is challenging and demanding, promoting your personal growth and professional development.
At TU Delft, there's a truly international learning environment, where you can collaborate, share knowledge, and network with learners around the world. Join the TU Delft community. Start learning online, today." "The Use of Loss Given Default (LGD) - Deloitte","https://www.youtube.com/watch?v=oCxIkNWq6cg","Hi, welcome to the third session of Voices from the Field. I'm Florian Reuter, I'm a financial risk consultant at Deloitte, and in this session we'll be looking into the use of loss given default. There are three questions we'll be focusing on: one, where do we use loss given default? Two, how is the market interpreting LGD? And three, how is the regulator viewing the use of LGD? OK, let's start with where we use loss given default. There are a number of examples we can look at. In this MOOC, we're concentrating more on the capital side and IRB models, so we're looking at the regulatory capital calculations. But economic capital is also an important use of LGD, which we likewise use in risk-adjusted return on capital calculations. Another important part is that we want to use LGD in our pricing. Another part is provisioning: much of provisioning is nowadays model-based, where we use LGD estimates; so loan loss provisioning. And last, but not least, we want to use LGD to distinguish our good and bad clients. PD is one of the drivers we look at for the good/bad distinction, but loss given default is also a very important component. The special asset management process also takes LGD estimates into account. So we can already see that when we develop LGD models, it is of utmost importance to keep in mind what the intended use is. Let's get back to one of the examples we looked at in the previous session of Voices from the Field. We looked at the future of IRB, and one of the examples we looked at was the huge variation between risk weights for defaulted residential mortgage loans. In this chart, on the left-hand side we see risk weights of 300%; on the right-hand side we see a risk weight of zero. So there's a huge variation, and we're looking at defaulted assets. A very obvious reason for these differences is of course the definition of default: differences in the definition of default, different interpretations. Another driver, and that's what we're looking at in this session, is LGD. What we see in the market is that there are lots of different interpretations for defaulted exposures and for the way we develop loss given default models. So, a huge variation. Now let's take a step back. We want to look at LGD. A very simplified example: we can look at a performing exposure going to default, and a default can result in a loss. The probability of default is an estimate of the probability that a client goes from performing to defaulted. So the default occurs, and then we want to know, once a default occurs, how much loss can I expect? The loss given default is the ratio of loss on the exposure resulting from a default. So there are already three components: one, how do we define a loss, what do we consider as a loss? Two, how large is the exposure? And three, what is the definition of default we're using? These three components drive the estimate for LGD and define what LGD we're looking at. Now, looking closer at LGD, we can consider performing LGDs, but also defaulted LGDs. From a performing perspective, we want to estimate the LGD for a client that is performing.
So it hasn't defaulted yet, and we only have pre-default information. Once a client has defaulted, we have more information: we have up-to-date default information, and we know in what stage of the default process the client is. This is information we want to take into account. For example, the time already spent in default could drive the LGD estimate. Also very important is the default stage of the client. This all drives the loss. So we're looking at the domain of an exposure: it can be either performing or defaulted. We can also look at the range: maybe we want to know the expected losses, or maybe we want to know the unexpected losses. So there are already multiple purposes of LGD. We already looked at the domain: we can either look at a performing stage, where we only know pre-default information, or we can look at a defaulted stage, where we know post-default information. From the range side, we could be interested in expected losses or unexpected losses. For provisioning, you're more interested in expected losses; for capital purposes, you want to know what the unexpected loss is. Now, what is regulation saying? Actually, there are only a few articles we can look at, and only a few articles that really say something about unexpected losses. For example, for LGD we're looking at an economic downturn, but it's not clearly defined what we consider as an economic downturn. For the domain side, we're looking at defaulted exposures. Is there more information? Well, there are only a few articles we can rely on, and there's not much information. This is one of the topics that was highlighted in the future of IRB. The future of IRB already says we'll be looking much more into defaulted assets: there will be more guidance on how to model defaulted exposures and on how we should consider an economic downturn in our LGD models. So the regulation leaves a lot of room for interpretation, but is moving towards more harmonization. Let's go into some more detail on the default process. This is a default process; it's a bit simplified, but it's very illustrative. First, going from performing to default. This is the domain side we already looked at: performing going to default, so an LGD estimate with pre-default information, and the LGD estimate is for a defaulted asset at the start of the default. Now the right-hand side. The right-hand side is the default process. A default process usually ends up in either litigation or no litigation. The no-litigation part is something that usually happens within the first couple of months: roughly 50% of the defaults could result in a cure. Cure means returning to the performing state. The cure rate usually decreases rapidly: within the first couple of months, the probability of a cure occurring goes down rapidly. Now for the litigation part. Litigation could mean restructuring or recovery, and a recovery could consist of multiple recoveries of collateral. For example, for a mortgage you'd have just one house, one recovery, but for corporate loans you may have multiple recoveries. So, both for the cure side and for the recovery side, it could be interesting to look at survival analysis. Survival analysis is a very hot topic in LGD modeling right now, and the use of survival analysis is something we'll be looking into in the next Voices from the Field. To wrap it up: LGD is used for multiple purposes; there's not just one LGD.
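The quantities discussed in this session fit together in a small numerical sketch. All inputs below are illustrative assumptions, except the roughly 50% cure rate quoted above; the blending of cure and recovery into one LGD is a common simplification, not the specific Deloitte model.

```python
# Minimal sketch of an expected-loss calculation using LGD.

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Classic one-period expected loss: EL = PD x LGD x EAD."""
    return pd * lgd * ead

# LGD blended over the two ends of the default process: a cured
# default returns to performing (zero loss), a litigated default
# loses whatever the recoveries do not cover.
p_cure = 0.50          # roughly 50% of defaults cure (per the lecture)
recovery_rate = 0.60   # assumed recoveries on litigated defaults
lgd = (1.0 - p_cure) * (1.0 - recovery_rate)

ead = 200_000          # assumed exposure at default, EUR
pd = 0.02              # assumed one-year probability of default

print(f"LGD = {lgd:.1%}")                                # 20.0%
print(f"EL  = {expected_loss(pd, lgd, ead):,.0f} EUR")   # 800 EUR
```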
And it's very important to keep in mind what the intended use is when you're developing an LGD model. Two, the market is interpreting LGD in very different ways, and this has resulted in large variations of RWAs. Three, the regulator is working towards harmonization; this is what we're seeing in the future of IRB, and more guidance is what we're expecting. Thanks for watching and see you in the next clip of Voices from the Field. Goodbye." "Fatigue Mechanisms","https://www.youtube.com/watch?v=Oo2rLpbTY-8","Previously, the fatigue phenomenon has been explained with respect to the different phases that are observed. There is the initiation phase, followed by a phase that represents macroscopic growth of damage. The transition between these two phases is not very strict in its definition; it cannot be quantified by a given extent of the damage. In general, the transition is considered to take place when the growth of the microscopic damage no longer depends on the surface conditions, but rather on the resistance of the bulk material. This transition may therefore be different for different metallic materials. Now, to understand the fatigue damage mechanisms in metals, various aspects are briefly discussed here in the order indicated on this slide. Micro-cracks initially extend along slip bands, and the crystallographic nature of the material will initially dominate the formation of micro-cracks. The crystallographic aspects considered here are the type of crystal lattice and the elastic anisotropy and allotropy. The three most common crystal lattices are body-centered cubic, like ferritic materials; face-centered cubic, like aluminium, copper and nickel; and hexagonal close-packed, like for example magnesium. The material response depends on this crystal lattice, but may still vary greatly. Take for example the elastic anisotropy, which may be substantially different even for the same lattice, as illustrated here for aluminium and copper. Slip systems relate to the crystallographic planes, but the ease of slip is greatly affected by how easily cross-slip can occur. For aluminium this is for example much easier than for nickel or copper. From one grain to another, the individual grain size and orientation contribute to how easily or with what difficulty micro-cracks develop from one grain into the next. The illustrated variation in properties depending on orientation has some similarity with the different properties that fibers and matrix have in composites. The initiation of matrix cracks in composites is often greatly affected by the individual constituent properties as well. The nucleation of micro-cracks may be easier at the free surface of a material because of less constraint, but often micro-cracks nucleate at inclusions in the material. In particular, inclusions in the material form a micro-level stress concentration from which a microscopic crack may nucleate. This stress concentration is similar for any inhomogeneous material. As mentioned before, the difference between fiber and matrix properties may initiate a crack, but the open structure of, for example, wood, or pores present in a material, may also form a nucleation site. If such a micro-crack nucleates below the surface, its growth, when observed at the surface, may be perceived as initially fast. However, this growth merely consists of breaking the ligament between the micro-crack and the surface.
An interesting observation made in laboratory experiments is the nucleation of microscopically small cracks that did not propagate further to macroscopic lengths. This observation was mostly made for notched conditions, where a high stress concentration caused the nucleation of a crack which, after developing over a few grains, retarded. This is illustrated in the graph on the left-hand side with the non-propagating cracks. The right-hand side graph illustrates the observation for these non-propagating cracks, particularly for high stress concentration factors. Cracks may nucleate, but may also stop after a few grain diameters. Away from the free surface, the restraint on cyclic slip may change, and hence the crack may encounter some sort of threshold for crack growth. Several types of barriers may be identified. Grain boundaries may act as barriers: a micro-crack may nucleate within a grain yet not be able to penetrate into neighboring grains. But also two-phase barriers, such as the pearlite islands in low-carbon steel or the alpha-beta interfaces in titanium alloys, may form a sort of micro-structural barrier. Here one has to understand that the fatigue limit, represented by the lower asymptote in the fatigue life or S-N curve, does not represent a limit on crack nucleation, but on the propagation of cracks until failure. The S-N curve represents failure lives. Below the fatigue limit, cracks may nucleate, but they don't grow to macroscopic lengths (see the sketch after this passage). As these micro-structural barriers depend on the material grain structure, the fatigue limit also depends on the material. For steel, for example, very distinct limits are observed, while for aluminium the limit may still slowly decrease after a lower knee point, and at extremely high numbers of fatigue cycles failure may be observed at stress amplitudes lower than the fatigue limit. Another aspect that we should consider when investigating fatigue fracture surfaces is the number of crack nuclei. Take for example the fracture surface of a car axle failure. It seems to have a single point of origin, as illustrated with the sketch on the right-hand side. However, this fracture surface, corresponding to a load case of reversed bending, clearly shows multiple locations of origin. The sharp corner edge easily nucleates multiple micro-cracks that at some point in time link up. Because the small fracture surfaces are not in the identical plane, link-up occurs with a step, which is visible as a line or marker in the direction of the crack growth, known as a ratchet mark. In general, a high number of crack nuclei indicates high local stress amplitudes, which may relate to a high loading amplitude, high stress concentrations, or a rough and damaged surface, for example. Looking at S-N curves, one may generally expect a high number of crack nuclei at high stress amplitudes, whereas near the fatigue limit in the end only a single crack may have nucleated and propagated to failure. This also explains the amount of scatter observed in experimental results: a high number of nuclei indicates easier crack initiation and hence less scatter, while near the fatigue limit the amount of scatter is substantial. Because the initiation phase is dominated by the surface conditions, the surface aspects are very important for this phase. The list here illustrates that the surface conditions, but also what the environment does to the surface, directly influence the initiation phase.
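The S-N curve and fatigue limit referred to above can be illustrated with a generic Basquin-type relation. This is a textbook form, not the specific curves shown in the lecture, and all coefficients below are hypothetical, chosen only to give a plausible steel-like curve.

```python
import math

SIGMA_F = 900.0        # MPa, fatigue strength coefficient (assumed)
B = -0.09              # Basquin exponent (assumed)
FATIGUE_LIMIT = 250.0  # MPa, lower asymptote of the S-N curve (assumed)

def cycles_to_failure(stress_amplitude_mpa: float) -> float:
    """Failure life from Basquin's law, sigma_a = sigma_f' * (2N)^b.
    Below the fatigue limit the S-N curve has no failure life: cracks
    may nucleate but do not grow to macroscopic lengths."""
    if stress_amplitude_mpa <= FATIGUE_LIMIT:
        return math.inf
    return 0.5 * (stress_amplitude_mpa / SIGMA_F) ** (1.0 / B)

for s in (500.0, 350.0, 250.0):
    print(f"{s:5.0f} MPa -> N = {cycles_to_failure(s):,.0f} cycles")
```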
The growth phase, on the other hand, is dominated by the bulk material resistance and, to a much lesser extent, by the environment. The data on the right-hand side supports this observation: two different surface roughness conditions, a smooth and a rough surface, clearly result in distinct initiation lives, while the crack growth lives are almost identical. One can also see this effect in the S-N curves: comparing the fatigue life curves of both smooth and rough surfaces clearly shows differences near the fatigue limit. In particular corrosion, as an environmental aspect, reveals this reduction in fatigue life curves near the fatigue limit, as will be discussed later in this course. During the crack growth phase, the crack growth is determined by the bulk material and no longer by the surface conditions. The crack generally propagates perpendicular to the principal stresses, by a mechanism in which deformation occurs along multiple slip systems. The illustration on the right-hand side shows a possible mechanism. The slip systems illustrated are in the location of maximum shear, and their deformation will cause the crack to increment. The slip deformation is not fully reversible, due to strain hardening, and during loading and unloading little microscopic plastic ridges will be formed while the crack increments. These ridges are called striations, and they cannot be observed with the naked eye, only with an electron microscope. Now, the exact mechanism of crack incrementing is not fully understood, and the literature illustrates various concepts of crack growth. The illustration on the right gives a symmetric representation of growth, while the illustration in the center proposes an asymmetric form of crack growth. Either way, the striations visible on the fracture surface are important features for fracture surface analysis. These striations differ depending on the magnitude of the load cycle. This is clearly illustrated with the fracture surface on the left-hand side, where a load spectrum was applied in which, after every ten small cycles, a larger load cycle was applied. The fracture surface reveals the repetitive nature of the striations, with a single large striation related to the larger load cycle. Where the image on the left looks at the surface from the top, looking at an angle with respect to the fracture surface reveals that the fracture surface is not perfectly flat. Although fatigue fractures are often characterized as relatively smooth and flat, the fracture surface isn't perfectly so. Depending on the load sequence, the formation of striations may come together with stepping up and stepping down on the fracture surface. In general, striations are fairly well visible in aluminium, but very difficult to see in steels or titanium. As mentioned before, the environment has an effect both in the initiation and in the crack growth phase. In the initiation phase, corrosion can be considered in two cases: corrosion damage is created and thereafter loading is applied in a non-aggressive environment, or an intact material is loaded while being in a corrosive environment. In both cases, damage to the surface will in the end form stress raisers, which may cause nucleation of fatigue cracks. The S-N curves on the right-hand side illustrate the effect: water and salt water as environments reduce the fatigue properties compared to air. Another aspect illustrated in this figure is the effect of frequency. If the frequency decreases, the fatigue properties reduce as well. At low frequencies, the load cycle is slowly applied, giving the environment more time to access the critical locations, like for example the crack tip.
Hence one has to consider the effect of the load cycle wave shape and its frequency. Low frequencies and slow ramp-up rates give the environment more access, while short and steep ramp-ups reduce the effect of the environment. The influence of the environment should not only be limited to the medium, but also includes the ambient temperature. In general, an increase in temperature reduces not only the mechanical properties, but also the fatigue properties of materials. Mostly this relates to material resistance, but the increase in thermal stresses within a built-up structure may also add to the magnitude of the mechanical loading. At low temperatures, the effect of temperature is the opposite. Partly this relates to the reduced amount of water vapor in the air, which reduces reaction and diffusion rates. An aircraft flying several hours at cruising altitude effectively experiences a cabin pressure load cycle at a very low frequency, but as it flies in a non-aggressive and cold medium, its contribution to fatigue is effectively low. Nonetheless, the image of the Concorde illustrates that when high-speed transport aircraft are considered, aerodynamic friction will cause heating of the structure. Hence aluminium alloys should be considered there that have sufficient fatigue resistance at this elevated temperature. The influence of temperature is similar for all materials, including composites: increasing ambient temperature softens the matrix, in particular near the glass transition temperature, and reducing temperature makes the matrix stiffer and more brittle. The consequence for fatigue damage development is then similar to metals. The images at the bottom here are delamination shapes observed in fibre-metal laminates when fatigue loading these laminates at different temperatures. A high ambient temperature yields large delaminations and faster crack growth, while low temperatures result in very small delaminations and slow crack growth, as illustrated here on the right. Because cyclic slip contributes to the nucleation and early propagation of micro-cracks, the load case will have a substantial effect as well. Take for example the case of a bar, loaded in torsion and in tension. In tension, cyclic slip occurs at an angle of 45 degrees with the axial load, while in torsion, maximum shear occurs both parallel and perpendicular to the axial direction. Another difference is that the normal stress component in the case of tension helps with the transition from cyclic slip to micro-crack, and subsequently helps to open that micro-crack. This opening is absent in the case of shear due to torsion, which hinders the formation of micro-cracks at low amplitude loads. If cracks initiate, they propagate perpendicular to the principal stresses, which in the case of torsion results in those spiral fracture surfaces. In general, fatigue failures in metals are characterized at the macroscopic level by the absence of plasticity; the formation of growth bands depending on the load spectrum applied; growth perpendicular to the principal stresses; a number of crack nuclei that depends on the load magnitude; and, in the case of multiple crack nuclei, the formation of radial steps or ratchet marks. At the microscopic level, cracks are observed to grow through the grains, while forming little plastic ridges called striations. A practical example of those striations is given here: a fatigue failure of a flap beam of a civil transport aircraft reveals striations that occur in pairs.
Each time, a small and a large striation occur together. The two major loads on this flap beam relate to the deployment of the flaps, which are partly out at takeoff and fully out while landing; the landing imposes the largest load cycle and hence the largest striations. The macroscopic difference in appearance can be illustrated with this example, where a helicopter rotor blade separated due to fatigue. Blade failure occurred in a section with a lightening hole in the spar of the blade, with a rivet hole at the top and the bottom of it. The hole and the cracks on the left reveal no macroscopic plastic deformation, while the right side clearly indicates the presence of plasticity by ovalization of the hole. The growth bands should not be confused with striations, which are microscopic little ridges on the surface. The growth bands are visible with the naked eye and relate to the load spectrum applied. As mentioned earlier, the fracture surface is not entirely flat, and little deflections in the fracture plane will reflect light differently. This little difference in reflection is visible as growth bands. Although growth of cracks occurs mainly perpendicular to the principal stresses, in thin sheets and plates one may observe shear lips at the surface. At the surface, plastic deformation is less restrained, which allows the fracture surface to tilt to 45 degrees towards the surface. This forms shear lips at the surface, which for thin sheets may result in complete tilting of the fracture plane. In summary: in fatigue of metals, cyclic slip results in the formation of cracks, where the initiation phase is dominated by surface conditions. Once the crack has propagated further, the resistance is governed by bulk material properties. The fatigue limit represents a limit to fatigue failure; initiation of microscopic cracks may still occur below this limit. The effect of the environment will be dealt with in a later learning unit of this course, but it clearly influences both the initiation and the propagation phase. Fracture surfaces contain information on the fatigue loading applied, which can be studied with electron microscopes." "Conservation Equations","https://www.youtube.com/watch?v=9q8nrfc2hrI","So we've had a look at the equation of motion for the restricted two-body problem and derived the trajectory equation, which gives us quite a bit of information about the shape of a particular trajectory, the size of a particular trajectory, and where the spacecraft is along that trajectory. We're now going to add to the tools that we've already collected, and we're going to do that by having a look at the conservation of specific energy and the conservation of specific angular momentum. As we're doing that, we're also going to get our first glimpse into the relationship between the position of a spacecraft in its orbit, via the true anomaly, and the time aspect; just a first glimpse. We're also going to get a glimpse of the orientation, namely the flight path angle. And as we do this, we're going to demonstrate Kepler's three laws along the way. Just to get us started off on the right foot, we'll write down the trajectory equation, which can be written down, for example, in this form. The trajectory equation describes all of the conic sections: it describes the ellipse and the parabola, it describes the hyperbola, and the circle as a special form of an ellipse. So automatically, by the derivation of the trajectory equation, we have demonstrated Kepler's first law.
So we're now going to take a look at the conservation of angular momentum. We're going to do that by taking the equation of motion and crossing r into that equation of motion, so it looks like this. Now, that's a simple operation. If we look at the right-hand side, we effectively have some non-vector quantities, some scalars, and r cross r. Well, r cross r is just 0, which means that r crossed with the second derivative of r with respect to time is equal to 0. OK. Well, r crossed with the second derivative of r with respect to time is nothing other than the first derivative with respect to time of r cross r-dot. And why is that so? Well, that's just the product rule, isn't it? The derivative of the first one crossed with the second one, plus the first one crossed with the derivative of the second one. And there we have it, because this term, a vector crossed with itself, is 0. So what we have is r cross r-dot here in the parentheses, and the derivative of that is apparently equal to 0, which means that if you integrate it, it is equal to a constant. Now, what we can do is write the r-dot vector as the velocity vector. And as we've just said, if you integrate, then it's a constant. That constant we call h, which is the specific angular momentum. Now, why is that the angular momentum? Well, if we look at what the angular momentum is, it's nothing other than the moment of linear momentum, mv. And to take the moment of anything, you cross the position vector with it. So we have r cross mv. And because we're going to work with the specific angular momentum, we divide H, the angular momentum, by the mass to get the specific angular momentum, the h vector, which is r cross v. We know that the specific angular momentum is constant, and it's perpendicular to both the velocity vector v and the position vector. That means that the plane defined by the r and v vectors is constant in space and perpendicular to h. So the motion is in a single plane. Let's consider the specific angular momentum a little more closely. First of all, we're going to look at the situation here where we have the path of a particular spacecraft. The position on the path of that spacecraft is indicated, of course, by the position vector r. And the angle that it travels through in a particular amount of time we can call theta; that's a general angle. We're going to look at the velocity vector of the spacecraft along the path. The velocity vector has two components: a component along the radius vector, which we'll call v_r, and a component perpendicular to the radius vector, which we'll call v_theta, because it goes in the theta direction. And v_theta plus v_r is v. So if we consider the specific angular momentum again, which is r cross v, then with this decomposition of v into two components, we have r crossed with the sum of v_r and v_theta. OK. Well, we can expand the parentheses, so that becomes r cross v_r plus r cross v_theta. And r cross v_r: well, r and v_r point in precisely the same direction, and therefore this quantity is 0. Which means that the h vector, the specific angular momentum, is just r crossed with the v_theta vector, the component of the velocity that is perpendicular to the r vector. And if we look at the magnitude of the specific angular momentum vector, well, that's nothing other than the magnitude of r times the magnitude of v_theta, the perpendicular component of the velocity.
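In symbols, the chain of steps just described is:

\[
\mathbf{r}\times\ddot{\mathbf{r}} = -\frac{\mu}{r^{3}}\,(\mathbf{r}\times\mathbf{r}) = \mathbf{0},
\qquad
\frac{d}{dt}\!\left(\mathbf{r}\times\dot{\mathbf{r}}\right) = \dot{\mathbf{r}}\times\dot{\mathbf{r}} + \mathbf{r}\times\ddot{\mathbf{r}} = \mathbf{0},
\]
\[
\Rightarrow\quad \mathbf{h} = \mathbf{r}\times\mathbf{v} = \text{constant},
\qquad
h = r\,v_{\theta}.
\]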
And v_theta is nothing other than r times d-theta-dt. OK, so that gives us r squared d-theta-dt. And as we've already discussed, the specific angular momentum is constant, so this quantity is a constant. Now, the fact that this value is constant, the specific angular momentum, and therefore r squared d-theta-dt is constant, will be quite handy in a moment, because we're going to consider the area shown here; in other words, the area that's swept out by the radius vector. But this area here looks rather large, so let's first consider an extremely small angle. That's one we can deal with more easily, because it has the shape of a triangle. Now, if we consider this triangle, where this is the radius vector and here is the angle theta, which will be extremely small, then this side has a length r cosine theta, and this side has a length r sine theta. OK. So now we're going to consider the area shown here. The area shown here is nothing other than one half the base times the height, which is one half r squared cosine theta sine theta. All right? Now, what we're going to do is figure out what happens when this angle theta becomes infinitesimally small. Well, we get a dA, and that's one half r squared d-theta, because for an extremely small angle, the cosine of theta becomes essentially 1 in the limit, and the sine of theta approximates theta in the limit. So for a very, very small angle, the sine of theta is essentially equal to d-theta. All right? So when we consider the amount of area that's swept out by the radius vector per unit time, well, we've just seen that for an infinitesimal amount of time this is the area, that is to say one half r squared d-theta-dt. But if we look at the result that we obtained previously, we'll notice that this is just one half the magnitude of the specific angular momentum. And, as we've seen, that's constant. So what we've just shown is that Kepler's second law is true: the area swept out in equal times is constant. We're now going to take Kepler's second law and adjust it a little bit so that we can integrate for the area as the radius vector sweeps out a full 2-pi radians, one full revolution. For an ellipse, that means that the time has gone from, say, time 0 to the end of the period. So we're going to integrate over a time span that is one full period. Now, if we do that, we end up on one side with the full area of an ellipse, and the area of an ellipse turns out to be pi times a times b, where a is the length of the semi-major axis and b is the length of the semi-minor axis. And if we integrate the right-hand side, well, h is a constant, that's the specific angular momentum, so we have h over 2 times the integral of dt. That turns out to give us the value of the period. Now, after a bit of manipulation with the geometric properties of an ellipse, we discover that if we solve for T, we get 2 pi times the square root of a cubed over mu. This is how we calculate the orbital period. And having shown that this is how you calculate the orbital period, we've at the same time demonstrated Kepler's third law, which states that the square of the period is proportional to the cube of the semi-major axis.
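The period formula just derived is easy to put to work. A minimal Python sketch, using Earth's standard gravitational parameter and an assumed example orbit:

```python
import math

MU_EARTH = 398_600.4418  # km^3/s^2, Earth's gravitational parameter

def orbital_period(a_km: float, mu: float = MU_EARTH) -> float:
    """Kepler's third law as derived above: T = 2*pi*sqrt(a^3/mu)."""
    return 2.0 * math.pi * math.sqrt(a_km**3 / mu)

# Example: a low Earth orbit with a semi-major axis of 7000 km
T = orbital_period(7000.0)
print(f"T = {T:.0f} s = {T / 60:.1f} min")   # ~5828 s, ~97 min
```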
We're now going to take a look at some of the relationships between v_r and v_theta. In particular, we're going to take a look at this angle here between the velocity vector itself and the v_theta vector, and we're going to call that angle gamma. The name of that angle gamma is the flight path angle. This is an important angle, because we need to consider at times the orientation of the spacecraft, and we're going to do that with respect to these two axes, v_r and v_theta. And v_theta is also known as the local horizontal; in other words, it's perpendicular to the radius vector. A natural question to ask is: how would we calculate the flight path angle gamma? Well, it turns out that gamma is an angle in the triangle whose sides are v_theta, v_r and v. So we can take the tangent of that angle gamma, and that's of course nothing other than v_r divided by v_theta. All right. Now, as we've just mentioned, the velocity vector has two components, v_r and v_theta. And since they are vectors, when you add them up you get, of course, the velocity vector. That also means that if you want the magnitude of the total velocity vector, then, thanks to Pythagoras, you just take the magnitude of the radial component of the velocity squared, plus the magnitude of the transverse component of the velocity squared, and take the square root. Fine. Let's take a look at each of these components then, because if we want to calculate the flight path angle, we might want to know a little bit more about each of them. We'll first take a look at v_r, and v_r, just to point out, is nothing other than r-dot. Note that we're looking at scalars here; it's important to be very clear about when you're dealing with a vector and when you're dealing with a scalar. If we take a look at v_r and we say that it's nothing other than r-dot, dr-dt, then what you can do is take the trajectory equation and take its derivative with respect to time. If you do that, then after some manipulation that we won't demonstrate here, you end up with the following expression: mu over h, times e, the eccentricity, times the sine of theta. So if you need to calculate the radial component of the velocity, this is the expression that you use. What do we know about v_theta? Well, we've already seen that it is actually r times theta-dot. All right? But we know something about the specific angular momentum, namely that as a scalar it is r times v_theta. OK, so that means that this quantity here is also h over r. Now, if we fill in the trajectory equation for r, we end up with the following: mu divided by h, times 1 plus e cosine theta. Here, for the moment, we use the angle theta for the true anomaly, which would normally be indicated with the letter nu. All right. Given these two quantities, the two components, we can now develop a relationship to calculate the flight path angle gamma.
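Collecting the relations just derived (with theta standing in for the true anomaly nu):

\[
v_r = \frac{\mu}{h}\,e\sin\theta,
\qquad
v_\theta = \frac{\mu}{h}\left(1 + e\cos\theta\right),
\qquad
v = \sqrt{v_r^{2} + v_\theta^{2}},
\qquad
\tan\gamma = \frac{v_r}{v_\theta} = \frac{e\sin\theta}{1 + e\cos\theta}.
\]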
We're now going to develop an expression for the conservation of energy. In order to do that, we're going to take the r-dot vector and dot it with the equation of motion. Now, if we look at the left-hand side, we can observe, because we've seen this manipulation before, that it is nothing other than one half the derivative with respect to time of the r-dot vector dotted with itself. And on the right-hand side, we have minus mu over r cubed times r dotted with r-dot. That's a relationship we've also seen before. All right, now let's look at the left-hand side. Let's write this as one half d-dt, and all we're going to do here is write the r-dot vector as the slightly more familiar v, so we have v dot v. On the right-hand side, we're going to cancel one of the r's, so we have minus mu over r squared times r-dot; scalar quantities. OK, now on the left-hand side again, we just have one half d-dt, and in parentheses we have v dotted with itself; well, that's nothing other than v squared. And on the right-hand side, we can make the observation that the expression here is nothing other than the first derivative with respect to time of mu over r. You can check that for yourself; it'll only take a moment. OK, now on the left and the right-hand side we have derivatives with respect to time, so we'll just bring all of the terms to the left-hand side, and then we end up with the following: the derivative of this term with respect to time is equal to zero. Now all we do is integrate both sides with respect to time, and we see that this term here, v squared over 2 minus mu over r, must be equal to some constant. That constant we're going to indicate with an epsilon. The constant turns out to be the total specific mechanical energy. That's slightly easier to see if we take a look at each of these terms. This term here looks a whole lot like one half m v squared, but with the mass divided out; that's because we're dealing with specific quantities. So that term is nothing other than the specific kinetic energy. And this term here is the specific potential energy, where we've chosen the zero reference point to be at infinity. So where r is infinity, that's where the potential energy is zero; and anywhere closer than infinity, you have a negative specific potential energy. So this equation here represents the conservation of energy, in this case written in the form of specific mechanical energy. It is also sometimes referred to by another name: the vis-viva equation. Given this expression for the conservation of mechanical energy, which we've just demonstrated, we can observe that on an orbit the mechanical energy is conserved: at every point on the orbit, the energy is the same. The only thing that's different is that for a different radius, a different position, you have a different speed. This turns out to be an extremely handy tool in our arsenal, in addition to the trajectory equation and the specific angular momentum.
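Because epsilon is the same at every point of the orbit, a state known at one radius fixes the speed at any other radius on the same orbit. A minimal Python sketch; the state vector below is an assumed example, not a value from the lecture:

```python
import math

MU_EARTH = 398_600.4418  # km^3/s^2

# Specific mechanical energy from one known state (r1, v1); because the
# energy is conserved, the speed anywhere else on the orbit follows
# from the same epsilon.
r1, v1 = 7_000.0, 9.0                      # km, km/s at some point on the orbit
eps = v1**2 / 2.0 - MU_EARTH / r1          # epsilon = v^2/2 - mu/r (constant)

r2 = 10_000.0                              # km, another radius on the same orbit
v2 = math.sqrt(2.0 * (eps + MU_EARTH / r2))
print(f"epsilon = {eps:.3f} km^2/s^2, v at r2 = {v2:.3f} km/s")  # ~6.84 km/s
```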
It would be quite handy, actually, to be able to characterize an orbit based on the amount of energy in that orbit. So we're going to take this equation and develop it slightly to come up with a handy value for the mechanical energy of each orbit. Let's take a look at that. Epsilon is v squared over 2 minus mu over r. Now, since on any orbit the total value of this is the same, we can consider the right-hand side at any point we like, for example at perigee. So we'll fill in the velocity at perigee, or periapsis if you will. And if we fill that in here and there, then we end up with a total value, which is the total mechanical energy of that orbit, and it's valid anywhere along the orbit. What we're going to do is use the fact that the magnitude of the specific angular momentum is r_p times v_p. We're going to use this fact in transforming this equation: instead of writing v_p squared over 2, we'll replace that with h squared over 2 r_p squared, minus mu over r_p; and we're going to multiply here by r_p in the numerator and in the denominator as well, so we just multiply by 1 there. All right, so that's the first step. Now we're going to make use of the following relationship, namely that the semi-latus rectum is equal to a times 1 minus e squared; but we know that it's also equal to h squared over mu. In addition, we're also going to use the fact that the radius at periapsis is equal to a times 1 minus e. All right. Now, if we do that, then we can get rid of the h term, and we can also get rid of the r_p term. We end up with the following. Now, with a bit of additional manipulation of these expressions, which I won't write out in excruciating detail here, this term, as you can check for yourself, can be reduced to minus mu over 2a. That means that the total energy can always be expressed as minus mu over 2a, so it is determined entirely by the length of the semi-major axis. That determines the energy of an orbit, regardless of what kind of orbit it is. So the vis-viva equation, the total specific mechanical energy, is written as follows, and this equation is valid for all conic sections." "Spacecraft Technology: Data Busses","https://www.youtube.com/watch?v=dD7VwwlGRw8","Welcome back. In the first session, we learned that data links are a key element of command and data handling. A data link is provided by a physical bus, a data protocol, and electrical signals. The combination is often referred to as a data bus, although there can be some confusion in the terminology. We are going to look at the dominant data bus types implemented in spacecraft. The military standard 1553 bus is very common in large spacecraft; people also simply call it the MIL bus. The data bus comprises a bus controller and up to 31 remote terminals. The bus controller could, for instance, be an onboard computer. A remote terminal could be a controller of a subsystem, or it could be a router connected to multiple other subsystems. It uses two wires for a differential signal. Remote terminals tap into the same physical bus. This bus topology is called a linear bus. It is also a serial bus, since all data is transmitted sequentially. All devices use coupling transformers to connect physically to the bus. In this figure, you see a simplified version with only a single transformer; in practice there could be several passive elements, depending on the distance between the tapping point and the remote terminal. The important aspect of the transformer coupling is that the main bus is protected against short circuits which can occur at the subsystems. As you can understand, this improves the overall reliability of the bus. The data rate of the MIL bus goes up to one megabit per second. Here you see the data message protocol for a transmission from the bus controller to a remote terminal. It all starts with a synchronization signal with a period equivalent to three bits. Why is this needed? Well, the remote terminal does not yet know at which exact frequency the data is provided by the controller. It needs to synchronize its clock first in order to be able to distinguish sequential bits. After the synchronization follows the address of the remote terminal. This tells for which remote terminal the message is intended. Next is a single bit which indicates whether the controller will send a message to the receiver or expects a message back. This is then followed by a subaddress, which can be used for the devices connected to a router. Then it is specified how many words the message should contain. The message header ends with a parity bit, which can be used to detect a single bit flip.
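The header fields just listed map onto a 16-bit command word: a 5-bit remote terminal address, the transmit/receive bit, a 5-bit subaddress and a 5-bit word count, with the sync pattern and parity bit added on the wire. A small Python sketch of packing such a word, with made-up field values; this is a simplified illustration of the standard's word layout, not a full 1553 implementation:

```python
def command_word(rt_address: int, transmit: bool,
                 subaddress: int, word_count: int) -> int:
    """Pack the 16 data bits of a MIL-STD-1553 command word."""
    assert 0 <= rt_address < 32 and 0 <= subaddress < 32
    assert 1 <= word_count <= 32
    wc = word_count % 32  # a word-count field of 0 conventionally means 32
    return (rt_address << 11) | (int(transmit) << 10) | (subaddress << 5) | wc

def odd_parity(word: int) -> int:
    """Parity bit chosen so the total number of ones is odd."""
    return bin(word).count("1") % 2 ^ 1

w = command_word(rt_address=5, transmit=False, subaddress=1, word_count=4)
print(f"command word = {w:016b}, parity bit = {odd_parity(w)}")
```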
In case of a transmission, the header is followed by the message of n times 16 bits. The receiver subsequently acknowledges the reception by transmitting its address and providing a status word to indicate good reception or faults. For receiving, there is a slightly different order, in which the remote terminal first responds with its address and status word, followed by the message words of, again, a multiple of 16 bits. There are also different sequences which only contain status words, or which allow communication between the remote terminals, but in all cases the bus controller leads the transaction. Now let me explain to you the purpose of a differential signal. A differential signal simply means that the signal on one of the two wires is identical to the signal on the other wire, but in the opposite direction. If you measure the difference, it simply looks like this. Now we look at the same signal, but this time there is external electromagnetic noise present which distorts the signal. The wires are very close together, so the electromagnetic disturbance coming from an external source is about equal and in the same direction on both wires. If we look again at the difference, we see a clean block signal again, since the noise is canceled out. This is called common-mode noise rejection. Here you can see the differential signal as measured on a logic analyzer. The peak-to-peak differential is about 28 volts. This is very high compared to other buses, and also very power demanding. So why is this? Well, the differential signal, the high voltage, and the shielding around the signal wires make this data bus very robust against electromagnetic interference. To enhance reliability further, this bus is typically implemented in a dual, triple or even quadruple redundant configuration. The high reliability is one of the most important reasons why this data bus, which has been around since the seventies, is still implemented in many expensive spacecraft. It will, for instance, be used in the JUICE mission to Jupiter's icy moons. If you have followed the course on space exploration, you are probably very familiar with this mission. JUICE is planned for launch in 2022 and will arrive at Jupiter in the early 2030s. Imagine that by then this data bus will already be some 60 years old. So let's take a look at a few alternatives. The I2C data bus is also a serial data bus with a linear bus topology. It uses one wire for a data signal and one for a clock signal. Both lines are pulled up to a reference voltage by the use of simple resistors. The reference voltage is typically 3.3 volt or 5 volt, similar to the supply voltages of many integrated circuits. The master device controls the bus and communicates with up to 112 slave devices. The signal is generated by pulling down the lines to ground. The data rate is, for most practical cases, limited to 400 kilobits per second. Thousands of integrated circuits have implemented I2C, ranging from microcontrollers to special-purpose devices. Compared to other buses, I2C consumes very little power. The maximum length of the bus is, however, limited to roughly 30 centimeters, making this bus unsuitable for large spacecraft. The availability and the low power consumption are, however, the reason that it is currently the most popular bus for a class of very small satellites called CubeSats. CubeSats are satellites of one or multiple units of 10 centimeters cubed. I2C is, for instance, implemented in the successful Delfi-C3 and Delfi-n3Xt CubeSats, which were developed and operated at TU Delft.
So let's take a look at a few alternatives. The I²C data bus is also a serial data bus with a linear bus topology. It uses one wire for a data signal and one for a clock signal. Both lines are pulled up to a reference voltage by the use of simple resistors. The reference voltage is typically 3.3 volt or 5 volt, similar to the supply voltages of many integrated circuits. The master device controls the bus and communicates with up to 112 slave devices. The signal is generated by pulling down the lines to ground. The data rate is for most practical cases limited to 400 kbit/s. Thousands of integrated circuits have an I²C interface implemented, ranging from microcontrollers to special-purpose devices. Compared to other buses, I²C consumes very little power. The maximum length of the bus is however limited to about 30 cm, making this bus unsuitable for large spacecraft. The availability and the low power consumption are however the reasons that it is currently the most popular bus for a class of very small satellites called CubeSats. CubeSats are satellites of one or multiple units of 10 cm cubed. I²C is for instance implemented in the successful Delfi-C3 and Delfi-n3Xt CubeSats, which were developed and operated at TU Delft. Each data message starts with a start condition from the master, followed by the 7- or 10-bit address of the slave device. The read-write bit tells the slave whether it will receive data or needs to return data. The slave then needs to acknowledge that it is addressed and ready for the next action. Then the actual message or return data will follow, which is acknowledged after each byte. The message length can be up to 255 bytes. The message ends with a stop condition from the master. Here you can see what the data and clock line signals look like. The advantage of the separation of data and clock is that the slaves don't need to synchronize to the data transmission frequency. This in principle increases the reliability of the data bus. However, as this bus is low voltage and not differential, both lines are very susceptible to electromagnetic interference and radiation events. A signal distortion on one of the lines can lead to a bit flip, which might not be too problematic. However, it can also lead to a missing address bit or a false start or stop condition. In practice we see that the handling of such anomalies is sometimes poorly implemented in the integrated circuits and causes bus lockups to occur. In my own research on data buses, I discovered that the majority of CubeSats using I²C experience this problem with this data bus. In a few cases this even resulted in a complete satellite failure. The last data bus we are going to explore in depth is SpaceWire. As the name already indicates, SpaceWire is designed specifically for space applications, by the European Space Agency. It uses a point-to-point bus topology, which means that one link only connects to one other device. One of these devices can however be a router, which connects several other devices via SpaceWire and other data buses. Data rates go up to 400 megabits per second. The bus uses a differential signal, like the MIL bus. It has a data and a strobe signal, which is similar to, but not exactly the same as, I²C. I will explain this later. The bus is full duplex, meaning that there are outgoing lines for data transmission as well as incoming lines for reception. These lines can be operated simultaneously. The eight lines, together with the shield, are wired to nine-pin connectors. SpaceWire allows automatic rerouting of the data in case of failures. This of course requires redundant links and routers. The combination of high data rates and high reliability makes it a very popular bus for modern spacecraft. It is simply implemented in FPGAs and ASICs, which are devices that will be explained in a later session. The disadvantages are that it requires quite some effort to implement it in existing systems, and it consumes relatively high power compared to, for instance, I²C. The message protocol is quite complex and would take too much time to explain and understand. If you are interested, you can find all the documentation on the website of ESA. We will now take a look at the signal properties of SpaceWire. The strobe signal will only alternate its logic level if there are two identical data bits in sequence. This is called data-strobe encoding. This means that for each bit, either the data or the strobe signal changes its logic level, but never both at the same time. If we now apply a simple exclusive-or operation, we can retrieve a clock signal, as you can see at the bottom of the graph.
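Here is a tiny sketch of the data-strobe encoding just described: the strobe toggles only when two successive data bits are identical, so exactly one of (data, strobe) changes per bit, and XOR-ing the two recovers the clock. The example bit pattern is arbitrary.

```python
# SpaceWire-style data-strobe encoding and clock recovery.
def strobe_from_data(bits):
    strobe, s, prev = [], 0, None
    for b in bits:
        if b == prev:       # data did not change -> strobe must toggle
            s ^= 1
        strobe.append(s)
        prev = b
    return strobe

data = [0, 1, 1, 0, 0, 0, 1]
strobe = strobe_from_data(data)
clock = [d ^ s for d, s in zip(data, strobe)]
print(strobe)   # [0, 0, 1, 1, 0, 1, 1]
print(clock)    # [0, 1, 0, 1, 0, 1, 0] -- alternates every bit: the clock
```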
The advantage of this approach is that it is a simple method which yields more robustness against external and mutual interference of both signal lines. After all, a change of both signals at the same time is not allowed and can thus be detected as an anomaly. I will now briefly tell you about some other data buses which are implemented in satellites, and some candidates for future satellites. The first one is the Controller Area Network, or CAN bus. CAN is a differential bus developed for the automotive industry. It is designed for time-critical functions. Its recent versions support data rates up to 5 megabits per second. In terms of performance, power consumption and reliability, this bus takes the middle ground compared to the buses discussed before. Serial Peripheral Interface, or SPI, is a data bus which has a close resemblance to I²C. However, it does not use digital addressing, but has a dedicated slave-select wire per device connected to it. Also, it is full duplex. Its maximum data rate is only limited by the clock speeds of the master and slave devices and can be up to several hundreds of megabits per second. For the rest, the advantages and disadvantages are similar to I²C. Time-Triggered Ethernet is a variant of the Ethernet you find in your wired computer network at home or at work. It is a modified version to be more robust and to allow time-critical operations, but it can be connected to terrestrial Ethernet devices such as a personal computer. Data rates currently go up to a hundred megabits per second. One of its main advantages is that it is an extension of a widely adopted standard for terrestrial applications. You can even think about your satellite as a network of devices which can be addressed through the internet. We might see this bus in the near future in some satellites. RapidIO is a data bus for computer systems with extreme performance for time-critical operations. The throughput is up to 10 gigabits per second for one lane, and can even be multiplied by adding more lanes. It is implemented widely in mobile phone infrastructure, for instance the equipment at cellular towers. The performance and its robustness make this point-to-point data bus interesting for some dedicated space instrumentation with very high performance requirements. There are however many more potential data buses which can be implemented in spacecraft. The space industry is looking more and more at the implementation of widely adopted terrestrial standards, with or without modification. Also, you might see some experiments with wireless communication inside a satellite in the near future; think of Bluetooth or WiFi as examples. Whatever the possibilities, keep in mind that for larger and more expensive spacecraft, reliability typically comes in first place. It is therefore likely that you will see most of the newer data buses implemented first on small demonstration satellites. This ends the session on data buses. Good luck with the exercises."
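As a recap of this session, the approximate maximum data rates quoted in the lecture can be collected in one small lookup table; these are the figures mentioned above, not datasheet values.

```python
# Approximate maximum data rates as quoted in this lecture.
MAX_DATA_RATES = {
    "MIL-STD-1553": "1 Mbit/s",
    "I2C": "400 kbit/s (practical limit)",
    "SpaceWire": "400 Mbit/s",
    "CAN": "5 Mbit/s (recent versions)",
    "SPI": "several hundred Mbit/s (clock-limited)",
    "TTEthernet": "100 Mbit/s (currently)",
    "RapidIO": "10 Gbit/s per lane",
}
for bus, rate in MAX_DATA_RATES.items():
    print(f"{bus:14s} {rate}")
```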
"Spacecraft Technology: Failures in Electrical Systems and Software","https://www.youtube.com/watch?v=bbaP7zhRN-4","Welcome back. Did you ever experience that your mobile phone or PC stopped responding and you needed to push the reset button to get it back alive? Or worse, that it really got broken? I bet you did. What would happen if this occurs in a spacecraft? Well, the answer is simple. Unless you have a hundred million euro to send a few astronauts up for an unscheduled repair, it may simply be the end of your mission. In previous sessions, I already discussed reliability and some typical failure causes. In this session, we will elaborate on that. We will discover all kinds of ways to destroy a satellite, and also some ways to prevent it. I've put this session under command and data handling, but as you will find out, many things discussed here can also occur in other spacecraft subsystems. So let's first sum up the major failure causes in software and electronics. Software bugs and electrical design flaws are human errors which speak for themselves. They are, however, a major source of onboard failures. Radiation can damage electrical devices; I will elaborate on this later. Before the launch, components can corrode in humid environments. In orbit, the extreme thermal environment can lead to thermo-electrical and thermo-mechanical stresses. Both can lead to open or short circuits, or change the electrical properties of a device. During the launch, there can be extreme vibrations due to the awesome amount of power generated by the rockets. Finally, there can be component manufacturing errors or assembly errors. Here, you can see an example of how a bad solder joint in combination with mechanical stresses can lead to an open-circuit fault. While most failure causes are pretty straightforward, like this example, we will pay some more attention to radiation. There is an abundance of particle radiation in space. In this picture, you can see a solar eruption which fires an enormous amount of protons and ions into space. These eruptions occur all the time, but the sun experiences a cycle of about 11 years with an active maximum and a passive minimum. Still, the activity and eruptions are very hard to predict, let alone the direction of the particle blast. Here you can see the Van Allen belts. The inner belt has protons trapped and the outer belt has electrons trapped. It is the Earth's magnetic field which keeps them trapped. The protons are more energetic and harder to shield against than the electrons. Please take note of an area above the South Atlantic where the inner belt comes very low and also impacts the lowest Earth orbits. This is called the South Atlantic Anomaly. In this graph, all the detected radiation events of a satellite called UoSAT-3 are plotted. This clearly shows the impact of the South Atlantic Anomaly. While the magnetic field of the Earth is to blame for the Van Allen belts, I should also note that the same magnetic field protects the Earth and low Earth orbits from the majority of particle radiation coming from the sun. Only near the poles does some of the solar particle radiation still penetrate, leading to a beautiful light phenomenon known as the aurora borealis. Cosmic rays are a third source, with mainly protons and ions. They come from all kinds of sources in the universe. Radiation hitting a surface of a spacecraft structure or component can also lead to secondary radiation. A high-energy particle can create bremsstrahlung, comprising electromagnetic radiation, electrons and ions. It's like a white billiard ball hitting a pool of neatly arranged other balls, as shown in this picture. So now we know about the radiation environment, but what does radiation do to the electronics inside a spacecraft? Charging means that there is a build-up of electrons on an insulator that prevents them from flowing away. This creates a voltage difference which can lead to biasing of transistors.
It can also lead to sparking if the voltage becomes too high for the insulator to withstand. Sparking can lead to transients in the circuits, with potentially harmful consequences. Proper grounding of the body and the solar panels can prevent most charging issues. Ionization means that atoms or molecules within the electronics lose or gain an electron due to a hit by a radiation particle. A single ionization typically does not significantly change the properties of an electrical component, but the build-up over time can change threshold levels and lead to leakage currents. Single-event effects are events which take place immediately after a radiation particle strikes. An upset is a change of state of a logic component. This is, for instance, a bit flip in memory or software. A latch-up is a short circuit in a component, which can be caused if the heat released by a particle changes the properties of the material locally. If the latch-up sustains, it can have a cascading effect on the other parts of the circuit or integrated circuit. A rupture or burnout is the direct destruction of a component or logic unit in an integrated circuit. There's not much to do about the latter, but luckily the chance is also limited. A measure to quantify ionization is the total ionization dose, which is expressed in radiation absorbed dose, or rad. Radiation-hardened electronics can operate up to much higher doses of ionization. Commercial off-the-shelf integrated circuits can typically sustain between one and ten kilorad before they start to malfunction or completely stop working. Manufacturers, however, don't provide you any data on the exact level. Radiation-hardened electronics can sustain between a hundred and a thousand kilorad, and manufacturers do provide the levels, which can be taken into account in a design. Radiation hardening is a combination of local shielding and a different transistor layout. The transistors are larger and consume more power than their commercial equivalents. Since the market is orders of magnitude smaller and the production process is more complex, prices are between a hundred and a hundred thousand times higher. So the question is, do we really need it? First of all, aluminium is a good shielding material for a part of the radiation. This lightweight material can be used for the outer structure of the spacecraft and will shield the electronics inside. The thickness of the panel determines the amount of shielding. You can also apply aluminium boxes around your electronics for even more protection. Take note, however, that shielding very close to the actual electronics can potentially also lead to a larger flux of bremsstrahlung. Now let's take a look at a study by ESA. On the horizontal axis, we see the thickness of the aluminium shielding. On the vertical axis, we see the total ionization dose on a logarithmic scale. The lower purple plot is an example of a low-Earth-orbit mission of eight years. With about 3 mm of shielding, the total ionization dose would remain below 10 kilorad. If your mission time is sufficiently short, or you can allow a bit of risk, commercial off-the-shelf electronics can be sufficient in this case. For a geostationary orbit at 36,000 km altitude and a mission lifetime of 18 years, you need at least 10 mm of shielding to get down to 10 kilorad. This is not impossible, but since geostationary satellites are very expensive and perform critical tasks for society, commercial off-the-shelf components are typically avoided because of the risk involved. The upper two lines are for the JUICE mission at the moons of Jupiter. As you can see, even 10 mm of shielding will yield 200 kilorad of total ionization dose. For this mission, commercial off-the-shelf electronics will simply not suffice.
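A back-of-the-envelope sketch of this trade-off, using the tolerance ranges quoted above (COTS parts failing between roughly 1 and 10 kilorad, radiation-hardened parts between roughly 100 and 1000 kilorad). The LEO dose of 8 krad is an assumed placeholder reading of "below 10 krad"; the other doses are the example figures from the ESA study discussed here.

```python
# Toy component-class check based on the total ionization dose figures above.
def component_class(mission_dose_krad: float) -> str:
    if mission_dose_krad < 1:
        return "COTS fine"
    if mission_dose_krad < 10:
        return "COTS possible, accept some risk"
    if mission_dose_krad < 100:
        return "heavier shielding or radiation-hardened parts"
    return "radiation-hardened parts required"

missions = [("LEO, 3 mm Al", 8),            # assumed: 'below 10 krad'
            ("GEO 18 yr, 10 mm Al", 10),
            ("JUICE, 10 mm Al", 200)]
for name, dose in missions:
    print(f"{name:22s} ~{dose:3d} krad -> {component_class(dose)}")
```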
Now let's discuss latch-up. A radiation particle can create a parasitic structure in an integrated circuit. In some locations, this can lead to a short circuit. Latch-ups are triggered by heavy ions, protons and neutrons. CMOS is most susceptible to latch-up. CMOS stands for complementary metal-oxide semiconductor. It is a very popular semiconductor type, as it has very low power consumption compared to others. This susceptibility can be mitigated by using silicon-on-insulator substrates for the integrated circuits. The insulator in the substrate prevents the parasitic structures which can cause a latch-up. It is even somewhat faster and more efficient than CMOS on bulk silicon substrates, which is the current mainstream. The manufacturing cost for silicon-on-insulator increases by about 10%. For space-grade components, this is peanuts, but for bulk commercial electronics this is still a barrier, which limits its mainstream availability. The good news, however, is that this technique allows for further miniaturization and speed improvements, and we can expect more commercial electronics to make the step towards silicon-on-insulator. But is there also a way to simply deal with a latch-up? Well, if you implement fast and adequate detection and power cycling, a major part of the latch-ups can still be resolved in time. After a power cycle, the parasitic structure is gone. This mechanism can be designed around the integrated circuit, but it can also be part of the integrated circuit itself. For commercial off-the-shelf electronics, however, you typically don't know whether such mechanisms are embedded. So now we know how satellites can fail. We also know a few ways to reduce the chances of failures. But what else can we do? The first measure is to apply redundancy. Spare systems or components can replace broken ones. This sounds simpler than it actually is. For instance, does the spare have to be in the same state as the primary, or is it okay to start fresh in a default configuration? Keeping the state of a spare updated is rather complex and requires power. Secondly, spares do not resolve human errors in design and testing, especially if the spare is identical. Also, the total ionization dose in the spare will be about equal to that in the primary. For some of these reasons, you can opt for a backup solution which is slightly different from the nominal device, but this will make your design more complex. Nonetheless, if properly implemented, redundancy can lower the overall system failure probability. To properly implement redundancy, you might need onboard autonomy to detect the failure of a subsystem or component, isolate it, and switch over to the backup system. Fault detection, isolation and recovery, abbreviated as FDIR, can also be used to deal with soft errors such as data bus lockups and anomalous behavior of software. FDIR can also be used for graceful degradation. Think for instance of dealing with less solar power due to a failure of part of the solar array. Graceful degradation simply means that you accept less performance and can still continue your mission, as critical functionality still remains. It's like your human body once you have passed the age of 25.
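A toy sketch of the FDIR pattern just described: detect a fault, isolate it, and recover either by power cycling (for instance to clear a latch-up) or by switching to a redundant unit. Real implementations monitor currents, watchdogs and telemetry; the Unit class here is purely illustrative.

```python
# Illustrative fault detection, isolation and recovery (FDIR) step.
class Unit:
    def __init__(self, name: str):
        self.name, self.powered = name, True

    def power_cycle(self):
        self.powered = False   # isolate: removing power makes the parasitic
        self.powered = True    # latch-up structure disappear, then recover

def fdir_step(primary: Unit, backup: Unit, overcurrent: bool, responsive: bool) -> Unit:
    if overcurrent:            # likely a latch-up: fast power cycle
        primary.power_cycle()
        return primary
    if not responsive:         # hard failure: isolate and switch to the spare
        primary.powered = False
        return backup
    return primary

active = fdir_step(Unit("OBC-A"), Unit("OBC-B"), overcurrent=True, responsive=True)
print(active.name)             # OBC-A, recovered by a power cycle
```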
A lot of trouble can be prevented by proper component selection. We already talked about radiation-hardened electronics. There are also fault-tolerant components, for instance integrated circuits which compute the same step twice or more and compare the outcomes, or which offer internal latch-up protection and FDIR within the integrated circuit. However, if you want to use commercial off-the-shelf electronics, it might be hard, if not impossible, to find specifications of radiation tolerance and internal failure measures. You can, however, perform screening of those components by testing them in a radiation particle accelerator facility and in thermal cycling chambers. This way, the missing elements of the specification of components can be discovered. Finally, a very popular measure is to select components with flight heritage. While limited flight heritage does not provide significant statistical output to assess reliability quantitatively, many potential design and testing flaws which could lead to infant mortality, as well as the susceptibility to total ionization dose, become less of a concern. For large, expensive spacecraft, the longing for flight heritage sometimes keeps designs very conservative and prevents innovation. Even for CubeSats nowadays, flight heritage is becoming a major trade-off criterion. Conformal coating is a thin polymeric film which is applied over a fully assembled electronic board. It protects the board against the environment, such as moisture, and provides extra structural rigidity to the circuit. It also improves the thermal handling capability. The improvement in reliability may be subtle, but since it's not so difficult or expensive to do, it may still be worthwhile. The most important measure to improve reliability, however, is testing. I know I'm kicking in an open door here. However, you may still be surprised how often in space projects the testing phase is too limited. Testing is the last thing you do before the delivery to the integrator or the launch provider, so it becomes the number one victim when budget or time runs out. It is not only important to do extensive and complete testing; you even have to take testing into account when designing a spacecraft. When designing, for instance, software or electrical circuits, you have to consider how you can test each part individually as well as integrated. Take also into account that redundancy and FDIR could become a hazard themselves if not properly tested, while these failure-handling mechanisms might also be the most cumbersome to test. So if there is something to remember from this session in a few years from now, it is testing, testing and testing. Well, this ends the part on command and data handling. I hope you have enjoyed it. Thank you." "The Trajectory Equation","https://www.youtube.com/watch?v=XmR2JG2FTZg","Hi there, welcome back. At this point, your head has stopped spinning. Well, mostly. And you've got a grip on how we obtained a solution to the restricted two-body problem. We're now going to examine and apply that solution, known as the trajectory equation, which will tell us a great deal about the basic forms of both closed and open trajectories. The trajectory equation is a scalar equation which tells us what the radius is as a function of a number of parameters. Depending on the value of the parameter e, the resulting trajectory can take on a number of different shapes. These shapes are called conic sections, and we'll start by considering one of them here: the ellipse. In the equation, h is the angular momentum, and mu is the gravitational parameter, which is the universal constant G times the mass of the central body M1.
In the restricted two-body problem, for which the trajectory equation is valid, both h and mu are constant. It's convenient to represent h squared over mu with the letter p, which is called the semi-latus rectum, or sometimes just the parameter. The semi-latus rectum is perpendicular to the major axis of the ellipse, and represents the distance from M1 to the point where it intersects the ellipse. Next we note that a is half the length of the major axis, called the semi-major axis. a determines the size of the orbit, and is one of the six key constants called Keplerian orbital elements, which allow us to fully specify a two-body orbit. The second Keplerian orbital element is the eccentricity e. The eccentricity determines the shape of the orbit. For values of e less than one, the trajectory takes the form of an ellipse, which includes the special case of a circle, for which e is zero. The third orbital element is nu, which is called the true anomaly, and this is the angle between the shorter segment of the major axis and the radius vector, which indicates the position of M2, the body whose motion we're actually interested in. You'll notice that the semi-major axis a is currently missing from the trajectory equation. Given that it's one of the Keplerian orbital elements, it would be handy to have it in there as well. So we're going to do a bit of analysis, put a where we want it, and develop a few handy relations along the way. First, we're going to look at the point on the trajectory where M2 is closest to M1. This is called the periapsis. Note that this point can go by other names, depending on which central body is M1. If M1 is Earth, we call the closest point perigee. If M1 is the Sun, we call it perihelion, and so forth. In any case, at periapsis the true anomaly nu is zero. When we substitute nu equals zero into the trajectory equation, we see that the length of the radius at periapsis, called rp, is equal to p over (1 plus e). Similarly, we can look at the point where M2 is furthest from M1, which is called the apoapsis. Just as before, we could call this point apogee or aphelion for the Earth or the Sun, respectively. At this point, the true anomaly is equal to pi. We substitute pi for nu in the trajectory equation and see that the radius at apoapsis is equal to p over (1 minus e). Now, if you add up the lengths of rp and ra, they have the same length as the major axis, which is 2a. So we set that up as an equation which, after substituting from above, contains a, p and e. Then we solve for p, which is equal to a times (1 minus e squared), and substitute that back into the trajectory equation. Now we have an expression for r which only depends on the three Keplerian orbital elements a, e and nu that we introduced above. Using comparably simple manipulations, another handy relationship can be derived for the eccentricity e. Go ahead and derive this one yourself. It's good practice, and it shouldn't take long at all. At this point, we've covered three of the six orbital elements. The remaining three describe how the elliptical trajectory is oriented in three-dimensional space, and we'll cover those later. Nevertheless, with the trajectory equation in hand, we can already conduct a variety of interesting analyses.
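A short numerical sketch of the trajectory equation in the form just derived, r = a(1 − e²)/(1 + e·cos ν), checked at periapsis and apoapsis; the example ellipse values are arbitrary.

```python
# The trajectory equation with the semi-major axis substituted in.
import math

def radius(a_km: float, e: float, nu_rad: float) -> float:
    p = a_km * (1.0 - e**2)          # semi-latus rectum p = a(1 - e^2)
    return p / (1.0 + e * math.cos(nu_rad))

a, e = 10_000.0, 0.3                 # example ellipse (km, dimensionless)
print(radius(a, e, 0.0))             # periapsis: a(1-e) = 7000 km
print(radius(a, e, math.pi))         # apoapsis:  a(1+e) = 13000 km
```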
Let's immediately put our new tools to work on an actual satellite. Let's consider SloshSat, a Dutch experimental satellite whose purpose was to test the dynamics of fluids in orbit. Designed for a 10-day mission, it was launched in 2005 on an Ariane 5 rocket from Kourou in French Guiana, where the primary launch site of ESA, the European Space Agency, is located. Its mass was 127 kilograms. You can access this type of information via numerous reliable sources, such as NASA, Space-Track.org, or Wolfram Research, via Mathematica or Wolfram Alpha. You can do this using an identification number. SloshSat's ID number is given here for two different catalogs: the SATCAT, or NORAD catalog number, designated by the US Air Force's US Space Command, and the International Designator, administered by NASA's National Space Science Data Center. As a point of interest, NORAD catalog number 1 refers to the final stage of the rocket which launched Sputnik 1, the first artificial satellite, launched in 1957. Its success kicked off the space race in earnest, which eventually culminated in the six Apollo Moon landings. But I digress. Here we have some key orbital data for the satellite, which I obtained via Mathematica. These include the gravitational parameter of the Earth, the average radius of the Earth, and the altitude at both perigee and apogee. Note that the figures and results are rounded off for this example. Given these data, we're going to find the length of the radius at perigee and apogee, the semi-major axis, and the eccentricity of the orbit. As we've discussed, the radius at perigee is the sum of the radius of the Earth and the altitude at perigee. So rp is 6,650 kilometers. Similarly, the radius at apogee is just under 40,000 kilometers. Now think about this for a moment. This is a decent back-of-the-envelope approximation, but it's certainly not exact, and for a variety of reasons. One notable area for improvement is the fact that we cannot be sure, given this information, that the perigee altitude was measured with respect to the average radius. Perhaps the equatorial radius or some other reference radius was used. Always remember to remain professionally skeptical and understand what you're dealing with, and what your information is based on. Now, given rp and ra, we can calculate the semi-major axis to be about 23,000 kilometers and the eccentricity to be about 0.71. Those give us the size and the shape of the orbit. These results can be checked against the reference sites I mentioned, which means that you have a virtually bottomless pit of practice material to work with. Just what you were looking for. In addition to conducting handy back-of-the-envelope calculations, as we've just done, we can also use the trajectory equation to make a number of observations about the general nature of elliptical orbits, so that we can get a feel for what's going on. Here we see the variation of the radius over the full range of the true anomaly, and for three different eccentricities. The red orbit with the smallest eccentricity of 0.01 shows a nearly constant radius. This orbit is almost circular. If we look at the other two orbits, we note that as the eccentricity increases, the variation in radius also increases. For the most eccentric orbit here, the radius at apogee is nearly 10,000 kilometers, whereas at perigee it is perhaps 5,200 kilometers. Hold on a minute. The trajectory equation tells us that the radius at perigee is 5,200 kilometers, but don't forget to use your common sense as an engineer. The radius of the Earth is 6,370 kilometers, give or take. So this orbit is not feasible in the real world.
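The SloshSat numbers above, reproduced in a few lines. The apogee radius is quoted as "just under 40,000 km"; 40,000 is used here to match the rounded figures.

```python
# Semi-major axis and eccentricity from the SloshSat perigee/apogee radii.
r_p = 6_650.0                    # perigee radius [km]
r_a = 40_000.0                   # apogee radius [km] (rounded)
a = (r_p + r_a) / 2.0            # from rp + ra = 2a
e = (r_a - r_p) / (r_a + r_p)    # from rp = a(1-e), ra = a(1+e)
print(f"a ~ {a:.0f} km, e ~ {e:.2f}")   # a ~ 23325 km, e ~ 0.71
```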
This is a good point to re-emphasize that you should always be clear about what you're dealing with. In this case, it's the radius we're talking about, not the altitude. It's no coincidence that the horizontal axis on this plot is placed where it is. Choices like this make it as easy as possible for your audience, and for yourself, to digest what you're telling them. Let's get a feel for what happens to the velocity as well. Here we've plotted the speeds on the same orbits against the true anomaly, so that we can see where our spacecraft speeds up and where it slows down. For the red orbit with the lowest eccentricity, we see that the variation in speed is minimal, similar to the variation in radius before. And the greater the eccentricity, the greater the difference in speed along the orbit. Note also that at perigee, where the true anomaly is zero, the speed is highest, and at apogee, where the true anomaly is 180 degrees or pi radians, the speed is lowest. We've already seen that the most eccentric orbit is not feasible, because of the rather pesky detail that Earth's surface gets in the way. So let's look more closely at the dark blue orbit with an eccentricity of 0.1. The perigee radius for that orbit was 6,750 kilometers and the apogee radius was 8,250 kilometers. Now imagine that your spacecraft starts out in a circular orbit with a radius of 6,750 kilometers, and you want to transfer to a different circular orbit with a radius of 8,250 kilometers. What would you have to do to make the transfer between these two circular orbits happen? Well, one thing you can do is use an elliptical orbit to go from the lower to the higher orbit. Let's first consider the speed of our spacecraft in the lower orbit. On the circular orbit, the speed is a constant 7.68 kilometers per second throughout the orbit. But on an elliptical orbit for which the perigee radius is also 6,750 kilometers, the speed at perigee is 8.06 kilometers per second. If our spacecraft starts on the circular orbit and we want it to be on the elliptical orbit, then we have to change its speed by 0.37 kilometers per second. We call this change a delta-V. Once we've done that, we consider the situation when our spacecraft arrives at apogee. And we can calculate that its speed, the speed that it has on the elliptical orbit at apogee, is 6.59 kilometers per second. And the speed it needs to have to be on a circular orbit at that same radius is 6.95 kilometers per second. So when it gets to apogee, we need to change its velocity again. This time it requires a delta-V of 0.36 kilometers per second. In other words, if we want to transfer from a lower circular orbit to a higher one, using an elliptical orbit that is just tangent to both, then we need two delta-Vs: one at perigee and one at apogee. If we add these delta-Vs up, then the total delta-V required for this transfer maneuver is 0.73 kilometers per second. So how do we make this delta-V happen? Well, some form of propulsion should do the trick. And given this delta-V, we could take the ideal rocket equation and figure out how much propellant we'd need to make it happen. We're making good progress now. The trajectory equation tells us quite a bit about a spacecraft's motion in an elliptical orbit. Once we add a few more tools to our toolbox, we'll be well equipped to understand how we get around in space. And you know you can't be an orbital mechanic without a toolbox. See you next time."
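The two-burn transfer above can be reproduced with the vis-viva equation, v = sqrt(mu·(2/r − 1/a)), using Earth's gravitational parameter of about 398,600 km³/s².

```python
# The two-burn transfer between circular orbits, via the vis-viva equation.
import math

MU = 398_600.0                                # Earth [km^3/s^2]

def speed(r_km: float, a_km: float) -> float:
    return math.sqrt(MU * (2.0 / r_km - 1.0 / a_km))

r1, r2 = 6_750.0, 8_250.0                     # circular orbit radii [km]
a_t = (r1 + r2) / 2.0                         # transfer-ellipse semi-major axis

dv1 = speed(r1, a_t) - speed(r1, r1)          # perigee burn: 8.06 - 7.68
dv2 = speed(r2, r2) - speed(r2, a_t)          # apogee burn:  6.95 - 6.59
print(f"dv1 = {dv1:.2f} km/s, dv2 = {dv2:.2f} km/s, total = {dv1 + dv2:.2f} km/s")
```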
"Spacecraft Technology: Ideal Rocket Theory (part 1)","https://www.youtube.com/watch?v=5eEOkrRvHLc","Welcome back. In any engineering design problem, it is extremely useful to derive a simplified model that describes the physics of the system. Such simplified models are a great support to the preliminary design phase and help to understand how the different design parameters influence the system performance. The same applies to space propulsion, for which the most commonly used simplified model is known as ideal rocket theory. In this video and in the next one, we will take a closer look at the assumptions, equations, and implications of this model. What is the objective that we expect to achieve with the ideal rocket theory? You certainly remember from the previous video that we want to find equations for three important flow parameters. The jet velocity, the mass flow rate of propellant, and the exit pressure at which the propellant is expelled. We will derive these three equations by means of a model based on two main simplifications. The first one is related to the rocket geometry, while the second one is related to the physical assumptions we make to simplify equations. In this course, I will show you only the building blocks of the ideal rocket theory, and the final equations obtained by combining this building blocks without any explanation of the intermediate steps and mathematical derivations. Keep also mind that the ideal rocket theory applies only to propulsion systems based on thermal expansion of the propellant, and cannot be used in other cases, such as for example, most of the electric propulsion concepts. Let's first take a look at the ideal geometry that we will be used to derive equations. In this figure, you can observe that we are considering only the final part of the propulsion system, where the heating and expansion process of the propellant takes place. In the combustion chamber, the propellant is normally a type ratio, high temperature, and very low speed. Don't that the world's combustion and high temperature are in brackets, because not in every propulsion concept the combustion takes place, or, more generally, the propellant is heated. The propellant is then accelerated in a convergent divergent nozzle, where no additional energy is usually provided, and what happens is simply a conversion of the propellant pressure and temperature into kinetic energy. We can highlight three, particularly important nozzle sections, for which in our notations we will use three different subscripts. The in-latch section denoted by C, where the propellant is assumed to be at the same conditions as the combustion chamber. The nozzle trod denoted by anastrisque, which is the smallest section at the end of the convergent and the denlet of the divergent, and the nozzle exit denoted by E. One very important geometrical parameter of the nozzle is the expansion ratio, defined as the ratio of the exit area to the trod area. Here is now a very long list with all the physical assumptions on which the ideal rocket theory is based. Let's go very shortly to all of them. The propellant flowing in the nozzle is considered not only a perfect gas, but also a calorically ideal gas, meaning that its specific kits are not dependent on temperature. Furthermore, the chemical composition of the gas in the nozzle is assumed to be constant. 
The flow in the nozzle is assumed to be steady, meaning that no dependence on time of any quantity is considered, and isentropic, meaning that no energy exchange between the fluid and the external environment takes place. We consider a one-dimensional and purely axial flow, meaning that all quantities vary only along the axial direction, and that the velocity is purely axial everywhere in the nozzle. Finally, we assume that no external forces, and in particular no friction, act on the propellant in the nozzle, and that the initial velocity of the propellant in the combustion chamber is negligible, and thus can be taken equal to zero. Based on these assumptions, we can derive the building blocks used to find the ideal rocket theory equations. The first group of building blocks are the so-called conservation equations, for mass, momentum, and energy. The conservation of mass means that no mass is generated or lost within the nozzle. Under the ideal rocket theory assumptions, this means that the mass flow rate shall remain constant everywhere in the nozzle. The mass flow rate, in turn, can be written as gas density times velocity times the nozzle cross-sectional area. Conservation of momentum means that the flow pressure, density and velocity are continuously linked to each other by means of the equation you see in the table. The conservation of energy, finally, implies a relationship between the flow enthalpy and velocity. Another set of building blocks can be obtained from the equations for a perfect, calorically ideal gas. You probably already know the equation of state of a perfect gas, which provides a relationship between its pressure, density, temperature and molecular mass. Since we have assumed isentropic flow, we can write an additional relationship that relates the flow pressure and density. In this relationship, a role is played by the quantity indicated here by gamma, which is the specific heat ratio of the gas, or the ratio of the constant-pressure specific heat to the constant-volume specific heat. Under our assumptions, we can also write the enthalpy as a function of the constant-pressure specific heat and the temperature. The constant-pressure specific heat is a property of the particular gas we are considering and can be calculated as a function of the specific heat ratio and the molecular mass. The Mach number is simply the ratio of the flow velocity to the speed of sound, which for an ideal gas can be easily calculated as a function of the other physical parameters. A Mach number higher than 1 means that the flow is supersonic, while for a Mach number lower than 1 the flow is subsonic. All the equations that we will see in the following are obtained starting from one or more of these building blocks, combined together in different ways, exactly like Lego blocks can be combined in different ways to obtain many different shapes. But first, there are still two assumptions we need to make in order to complete the picture. This is, once again, our ideal rocket geometry, and these are the flow conditions in the combustion chamber, up to the nozzle inlet section. We have already seen that the chamber velocity, and thus the Mach number, is assumed to be zero. But what about the other chamber conditions? For the moment, we assume that we know the propellant pressure and temperature in the combustion chamber. We will see in the following a few more details on how these quantities can be estimated in different types of rockets.
We assume that we also know the other propellant properties: molecular mass, specific heat ratio and constant-pressure specific heat, which, as another consequence of our assumptions, are constant everywhere in the nozzle. Have you ever asked yourself why the nozzle of a rocket is convergent-divergent? Is there any special reason for this particular shape? This is a question we can easily answer with the help of ideal rocket theory. By combining our building blocks, it is possible to derive this equation, which shows that the area variation of the nozzle dA is strictly related to the velocity variation dV through the Mach number. In the convergent part, where the nozzle area decreases and thus dA is negative, the flow can be accelerated, with a positive dV, only when it is subsonic, so when the Mach number is lower than one. In the divergent part, the nozzle area increases and dA is positive. Here the flow can be accelerated only when it is supersonic. Thus, to make the propulsion system effective and accelerate the flow continuously and everywhere in the nozzle, we need a subsonic convergent and a supersonic divergent. This also means that the flow will be sonic at the nozzle throat, which is a very important characteristic of all nozzles used in rocket propulsion systems. We are now ready to discuss the equations for the three flow parameters that, remember, were our initial objective for this video. We start with the jet velocity, which can be calculated by means of this equation. A high jet velocity, desirable for a better performance of the system, can be achieved in different ways. A high chamber temperature is beneficial for the jet velocity, as well as a low molecular mass. This is quite obvious, considering that lighter molecules are easier to accelerate to high speeds. A higher jet velocity is also obtained with a lower ratio of the nozzle exit pressure to the combustion chamber pressure, or in other terms, when the flow is expanded more starting from the same chamber pressure. Let's now take a closer look at the mass flow rate. Here is the equation for this flow parameter, where a role is played by the Vandenkerckhove function of the specific heat ratio. We know that a high mass flow rate is beneficial to achieve a high thrust level. This result can be obtained with a low combustion chamber temperature or a high molecular mass. Note that this is exactly opposite to what you need to achieve a high jet velocity. A high mass flow rate can also be obtained with a high chamber pressure, or with a large nozzle throat area. Remember that, for an effective convergent-divergent nozzle, the flow needs to be sonic at the throat. With a given throat area and given chamber conditions, a sonic throat is made possible by only one specific value of the mass flow rate: the value given by this equation. The flow is therefore controlled by the nozzle or, in other terms, it is choked.
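The two equations just discussed, in their standard ideal-rocket-theory form, can be sketched as follows. The example numbers (gamma, molecular mass, chamber conditions, throat area) are arbitrary illustration values, not taken from the lecture.

```python
# Ideal rocket theory: jet velocity and (choked) mass flow rate.
import math

R_A = 8.314462                   # universal gas constant [J/(mol K)]

def vandenkerckhove(gamma: float) -> float:
    # Gamma(gamma) = sqrt(gamma) * (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
    return math.sqrt(gamma) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def jet_velocity(gamma, M_kg_per_mol, Tc_K, pe_over_pc):
    R = R_A / M_kg_per_mol       # specific gas constant [J/(kg K)]
    return math.sqrt(2.0 * gamma / (gamma - 1.0) * R * Tc_K
                     * (1.0 - pe_over_pc ** ((gamma - 1.0) / gamma)))

def mass_flow(gamma, M_kg_per_mol, Tc_K, pc_Pa, At_m2):
    R = R_A / M_kg_per_mol
    return vandenkerckhove(gamma) * pc_Pa * At_m2 / math.sqrt(R * Tc_K)

# Illustration: gamma=1.2, M=20 g/mol, Tc=3000 K, pe/pc=0.01, pc=5 MPa, At=10 cm^2
print(jet_velocity(1.2, 0.020, 3000.0, 0.01))          # ~2800 m/s
print(mass_flow(1.2, 0.020, 3000.0, 5.0e6, 1.0e-3))    # ~2.9 kg/s
```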
For the exit pressure, unfortunately, the situation is slightly less straightforward. It is possible to derive this equation, which relates, through the specific heat ratio, the nozzle expansion ratio to the ratio of exit pressure to chamber pressure. This equation is implicit and cannot be solved directly for the pressure ratio once the nozzle geometry is known. It needs to be solved numerically, by trial and error, or graphically. Here is an example showing the relationship between expansion ratio and pressure ratio for three different values of the specific heat ratio gamma. Note that the dependence of the curve on the specific heat ratio is relatively weak. A high expansion ratio means that the flow has more room to expand, and thus a lower exit pressure can be achieved for a given chamber pressure. Remember that a lower exit pressure means a higher jet velocity. When the chamber conditions and nozzle geometry are fixed, the nozzle exit pressure is fixed. However, this nozzle can work under different conditions, depending on the altitude and thus the ambient pressure. Three different cases are possible. When the exit pressure is lower than the ambient pressure, we have an over-expanded nozzle, since the flow has been expanded too much with respect to the external ambient conditions. When the exit pressure is exactly the same as the ambient pressure, the nozzle is adapted. When the exit pressure is higher than the ambient pressure, the nozzle is under-expanded. If the exit pressure is different from the ambient pressure, the flow adjusts to ambient conditions by means of a set of shock waves immediately after the nozzle. It is possible to show that, for a given nozzle geometry, the thrust is maximum at the particular altitude and ambient pressure conditions where the nozzle is adapted. We have now derived the equations for the three flow parameters that we needed, and thus achieved our objectives. In the next video, we will see a few other important rocket performance parameters. Thank you for your attention." "Moody's KMV Model","https://www.youtube.com/watch?v=f7YGP5oZqPU","Hi there, in this video lesson we will deal with Moody's KMV. Moody's KMV is one of the most important industry models out there for the estimation of the probability of default of a counterparty. Or, if we want to use Moody's terminology, instead of the PD we will deal with the EDF, the expected default frequency. Now, the expected default frequency is nothing more than the probability of default of our counterparty over a one-year time horizon. Moody's KMV is a structural model of default that originates from the Merton model, and it tries to overcome many of the weaknesses of the Merton model. For example, we substitute the normal distribution, which you know is the distribution according to which we compute the probability of default of a counterparty under the Merton model, with another distribution which is empirically computed. This new distribution allows for better tails, so for more extreme events, and you know that these can be much more plausible than the faint tails of a normal distribution. Then, for what concerns the liability level, capital B under the Merton model is substituted with a more realistic liability structure that takes into account intermediate payments, and not only the zero-coupon bond with maturity capital T and face value capital B. And this also allows for the possibility of default before maturity, not only at maturity as in the Merton model. Further, we introduce a quantity called the distance to default, which tries to simplify the relationship between the market quantities that we use as an input to compute the probability of default of a counterparty and the probability of default itself. We can start from the Merton model in order to understand the most important characteristics of Moody's KMV. First of all, let's define the EDF according to Moody's KMV. This is the probability of default within one year. So, in order to obtain this, we can start from the Merton quantity we know. We set capital T equal to 1, so you see T disappears from the equation.
And then we use the symmetry property of the normal distribution, so that we can express our probability in terms of the survival function. We then substitute capital B, that is, the liability level according to the Merton model, with a capital B tilde, which is a quantity more representative of the complexity of the liability structure of a company. We are here considering all the liabilities that are payable within one year, so also all the intermediate payments. Then we substitute the entire argument of the survival function with a quantity, the distance to default, that we will define in a few minutes. Finally, we substitute the normal survival function, 1 minus capital Phi, with an empirical survival function. This is essentially the way in which we can move from the Merton model to Moody's KMV. As in the Merton model, the quantities V0 and sigma V are not directly observable and need to be inferred from data. The starting point is more or less the same: we exploit the European-call behavior of the equity St. To be more exact, Moody's KMV does not exactly rely on the standard formula for a European call, the one we have used so far under the Merton model, but rather uses a proprietary function that includes the formula of the European call but also adds extra arguments, like for example the quantity D, that is the leverage ratio of the company under scrutiny, and the quantity C, that is the average coupon paid by the long-term debt of the company, if this information is available, or of a homogeneous group of companies similar to the one we are interested in. Then, thanks to an iterative procedure, an iterative algorithm, we can compute the quantities sigma V and V0, the two quantities we still miss and that we need in order to compute the distance to default, the quantity we are going to introduce in a minute, which is the basis, the fundamental quantity, for the estimation of the EDF. As said, Moody's KMV tries to overcome some of the weaknesses of the Merton model, for example the idea that default can only happen at maturity. This is quite a strong assumption. In Moody's KMV this is no longer true: we are considering the possibility of intermediate default. And for what concerns the asset values, we know that asset values are not necessarily lognormal, as is assumed by the Merton model. In fact, the empirical literature shows that very often asset values have heavy tails, meaning that large deviations are much more probable than what we would expect under a lognormal distribution, if we consider asset values, or a normal distribution, if we consider the logs. Starting from this point of criticism, Moody's KMV introduces a quantity called the distance to default, DD as an acronym, which is probably the most important quantity under this approach. The DD may appear a simple ratio, the one you see on your screen, but in reality it is the result of a careful analysis of the default phenomenon. In this quantity, for example, you see the B tilde we were speaking about before, that is the new threshold we define, and it typically represents all the liabilities that are payable within one year. In Moody's KMV the distance to default is used to approximate the argument of the survival function; in the Merton model we used something else, as we are going to see in a minute. If you are asking yourself how this substitution is possible, just notice that the difference of log V0 and log B tilde can be approximated by the expression you see on the screen, V0 minus B tilde over V0. For what concerns the difference between mu V and half of the asset variance, sigma squared V, empirical evidence shows that this difference is negligible, very, very close to 0.
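A sketch of the approximation just given, DD ≈ (V0 − B̃)/(V0·σV). The mapping from DD to an EDF is the empirical, proprietary step at Moody's; the toy table below is invented purely for illustration and carries no real default frequencies.

```python
# Distance to default and an illustrative (made-up) DD -> EDF lookup.
def distance_to_default(V0: float, B_tilde: float, sigma_V: float) -> float:
    return (V0 - B_tilde) / (V0 * sigma_V)

TOY_EDF_TABLE = [(1.0, 0.10), (2.0, 0.04), (3.0, 0.01), (4.0, 0.002)]  # (DD, EDF)

def toy_edf(dd: float) -> float:
    # Step-wise lookup: the largest tabulated DD not exceeding dd.
    edf = TOY_EDF_TABLE[0][1]
    for threshold, freq in TOY_EDF_TABLE:
        if dd >= threshold:
            edf = freq
    return edf

dd = distance_to_default(V0=100.0, B_tilde=70.0, sigma_V=0.15)  # = 2.0
print(dd, toy_edf(dd))   # two firms with the same DD get the same EDF
```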
As we have already said, Moody's KMV does not rely on the normal survival function, but rather on an empirical survival function, which is estimated on a huge historical data set. This data set collects the proportion of companies defaulting for different values of the distance to default and for different time horizons. So here we are. In front of us we can collect all the ingredients we have been talking about in the last minutes, and we discover that the expected default frequency according to KMV, remember, the probability of default within one year, is nothing more than the probability we obtain by applying the empirical survival function we can estimate from the data to a specific value of the DD, which will be the measure on the basis of which we make our evaluations. Two very different companies that share the same distance to default will essentially have the same EDF, the same probability of default over one year. Since I like to repeat stuff in order to make you understand, in front of you you see another graphical representation of how we can move from the Merton model to Moody's KMV. As you see, we have the different ingredients: the liability level capital B is now substituted by the liability threshold capital B tilde; the probability of default is now what we call the EDF, okay, but this is just a minor change. But most of all, the lognormal distribution, the normal distribution in the logs, is substituted by an empirical counterpart, an empirical survival function that we can estimate from data; whether the normality assumption holds is, in Moody's KMV, not really relevant. Actually, you can still prove results under it, but it's not really relevant, because the PD now depends on the DD, the distance to default. Moody's KMV is a capital market model that uses information from the market to compute the probability of default of a counterparty. In fact the DD, the distance to default, incorporates information about the equity, about its value on the market. This makes Moody's KMV react quickly to changes in the economic prospects of the counterparty, and the DD also incorporates information about the macroeconomic scenario in which we are making our evaluations about the probability of default, or the EDF in this case, because you can imagine that the prices on the market incorporate expectations about the economic situation in which we are making all our evaluations. Being a capital market model, one of the limitations of Moody's KMV is that it is typically available only for traded companies, companies that are listed on the market. So for the small company behind the corner it's quite difficult to use Moody's KMV, at least in this version. Moreover, since this model reacts quickly to changes in the economic prospects on the market, it can essentially be affected by the problem of procyclicality that you know, for example, from when we deal with value at risk and all the discussion about the procyclicality of value at risk; it's quite a relevant problem when we deal with credit risk. Once again, more details can be found on the course platform. For the moment I want to thank you for your attention and say goodbye." "Characterizing Fatigue Damage Growth","https://www.youtube.com/watch?v=c7jf0XDzE9k","Imagine. You come home and your partner points her finger at you, asking: what's happening with our money?
You're surprised, because you didn't realize there was a problem with your savings. But then she shows you this chart, which indeed looks disturbing. Since you don't recall a problem from the last time you checked, you open again the balance of your bank account, to see this chart. And now you understand the problem. Although the data is identical, the format of both graphs is different. They convey different messages. Hence it's not solely the data here, but to a great extent the perception, that is communicated. Probably you will respond now with: yeah, I knew that. But do you? Whenever you analyze fatigue crack growth data, do you realize what the format of your own chart is doing to your own perception? Let us look at the following example. We have an object that travels along a straight line, and it started with an initial velocity. Now we measure the distance that the object travels as a function of time. We can do that for, say, three initial velocities, to obtain the following data set, which in a graph will look like this. Now, we are interested in making predictions, so we are looking for a format that we can work with. So let us plot this graph in double-logarithmic form. Here's magic moment number one. Whenever we see an apparent linear trend in our graphs, we tend to get excited, indifferent of the scales of the graphs. So we draw trend lines through the data and get the following power-law functions. There's an influence of the initial velocity visible, so let's call that the initial velocity effect. Now, to get to a prediction model, we can generalize the power-law function to this simple equation, in which both the capital C and the exponent n are a function of the initial velocity. So we plot both against the initial velocity and we get trend lines. Now, the nice thing of a tool like Excel is that we can evaluate different trend lines to find the best fit. Here's magic moment number two. If the R-squared value is high, say above 0.9, we are generally satisfied: this trend line must be good. And now we can make predictions, so let us draft the paper on the assessment of the initial velocity effect in describing the distance of an object traveling along a straight line. However, there is another approach to this problem. Apparently not only time is a governing parameter here, but also the initial velocity. So when we go back to this graph, we can try to find a parameter for the x-axis that includes both time and the initial velocity. Here we are: with time times the square root of the initial velocity, all three curves collapse. And this is magic moment number three. When multiple curves collapse onto a single curve, we really get excited. So we have now the governing parameter, which must be correct, because those curves collapse. And what we now do is plot again a power law through this data to find the C and n values. Now, for this given case, both parameters apparently have the same value, which brings us to the fourth magic moment: if certain parameters clearly relate, then there must be something correct in what we do. And now we can make predictions, so let us draft the paper on the assessment of the initial velocity effect in describing the distance of an object traveling along a straight line. However, based on that paper, someone else performs similar tests, but measures displacement for a long duration of time. Looking at that data, it seems the initial velocity effect that we identified vanishes once time increases. Hence, predictions with our equations don't work well in the large-time range.
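Before we come back to the physics, the "standard approach" above is easy to reproduce numerically: generate distance data from an equation of motion and fit a power law per initial velocity. The acceleration value and velocities below are assumed purely for illustration.

```python
# The seductive power-law fit: straight lines in log-log, yet v0-dependent
# C and n, and no validity outside the fitted time range.
import numpy as np

a = 0.45                                  # hidden acceleration [m/s^2] (assumed)
t = np.linspace(1.0, 60.0, 30)            # time [s]
for v0 in (1.0, 2.0, 4.0):                # three initial velocities [m/s]
    d = v0 * t + 0.5 * a * t**2           # the actual equation of motion
    n, logC = np.polyfit(np.log(t), np.log(d), 1)   # linear fit in log-log
    print(f"v0 = {v0}: d ~ {np.exp(logC):.2f} * t^{n:.2f}")
```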
There appears to be a limitation to the range of validity of our equation. How do we deal with that? Well, let us come back to the physics of this problem. The problem of an object traveling in a straight line is generally described by the following equations of motion. So let us focus on the equation for the distance and compare it to the equation we just obtained. We are missing a parameter here: the acceleration a. So the original data, plotted in the original graph, could have been fitted to the equation of motion, which would have revealed that the acceleration apparently is 0.45 meters per second squared. And that is what we missed with our standard approach for fatigue damage growth assessment. So next time, when we plot those well-known Paris curves illustrating a stress ratio effect, we have to start paying attention to what we do. Instead of getting excited when all these curves collapse when plotted against some sort of effective delta K, we should start doing our research, and ask ourselves the question: what is the physics of this problem?" "Advanced Credit Risk Management: Casual Friday","https://www.youtube.com/watch?v=5VTN6XqHvaw","Hi there, welcome to our first casual Friday session, and since it's casual Friday, let's get relaxed. So I will just finally remove this bow tie, and I hope you will do the same. Also, enjoy some coffee, some tea, whatever you prefer; I personally suggest an espresso. But you know, that's personal preference. Now, what's the casual Friday? The casual Friday is a nice opportunity for you to ask questions on the course forum and to get my answers in these videos. Naturally, you can also ask questions via the other media we have on the course platform, via Twitter, and you can also write me emails at my TU Delft email address. Every week, together with my assistants, we will pick some of the questions. You can understand I cannot answer all questions in a video. We will pick the more general questions, we will pick the most-asked questions, and I will answer them in a video. All the other questions will just be answered with posts. So please ask your questions, because, you see, this is just a fake casual Friday: I have no question to answer, and I'm also pretty sad about that. So I'm really hoping next week I will have a lot of questions to answer. Don't be ashamed. We can learn a lot of things together. So really use the forum and ask your questions. I hope to see you next week online in this casual Friday. See you next week. Bye-bye." "Working with Stress Concentration Factors","https://www.youtube.com/watch?v=Y3v7jlUWmzA","In a previous video, we looked at the stress concentration factor as a parameter to describe the influence of the geometry on the fatigue performance of a structure. We looked at what causes a stress concentration, and I gave you a definition, but that definition related to a notch in an infinite plate, because then, mathematically, the stress concentration can be well described. But how does that work in practice? Because in practice we work with finite dimensions of a structure. And in this video I want to spend some time on that. We should realize that the stress concentration factor does not capture all the aspects related to the geometry.
We may have two different cases, for example two circular notches with different diameters, in plates with a different width, that in the end have the same stress concentration factor; but simply because the size of the highest-stressed region in the notch is different, the fatigue performance may still be different. So how do we cope with that in practice? Well, let's have a look at that. A circular notch in an infinite plate has a stress concentration factor equal to 3, as we have seen before. And now the question is: what if the plate has a finite width? What is then the stress concentration factor? Well, that can be illustrated with a graph here on the right-hand side. The stress concentration factor equal to 3 for an infinite plate reduces once the width of the plate becomes smaller. And this relationship illustrated here is given by the equation of Heywood, which says that the stress concentration factor of a circular hole in a plate with finite width is given by this equation. In other words, the stress concentration factor reduces if the width becomes smaller. Now, that may seem counterintuitive, because if the width becomes smaller the nominal stress will increase. So how does that work? Well, let me illustrate it with this example. We have an infinite plate with a circular hole, and the corresponding stress concentration factor is given by Kt = 3. Now we can make a distinction between the nominal stress concentration factor, as I have given before in the definition, the peak stress divided by the nominal stress, but we can also define the gross stress concentration factor, which is the peak stress divided by the applied stress. Now, if I reduce the width of this plate, then obviously both the nominal stress and the peak stress go up. That means that if I divide the peak stress by the increasing nominal stress, the effective stress concentration factor, which is the nominal Kt, is reducing. But if I divide the increasing peak stress by the original applied stress, then obviously the gross Kt is increasing. So if I plot the Heywood relationship, which is this blue curve starting at three for an infinite plate and reducing once the width becomes smaller, then the corresponding Ktg, the gross stress concentration factor, is increasing, and given by this equation. Now, you can recalculate one equation into the other, simply by applying the equation given here. If this circular notch is an elliptical notch, then obviously the relationship is similar in trend, but different in magnitude. So depending on the long and short axes of this ellipse, we start with a different stress concentration factor, and then the nominal stress concentration factor will decay once the width becomes smaller, and the gross stress concentration factor will correspondingly increase once the width becomes smaller. Now, if you look at handbooks or textbooks that give you relationships for this, you can immediately recognize that this is a very empirical description. For all those empirical relationships, based on the dimensions a over b you can find the corresponding constants C1 to C4, and if you fill in the equation you can calculate the stress concentration factor. This basically means you either work with those empirical relations, or you simply look it up in a graphical chart as you can find, for example, in a textbook like Peterson. So let's take a practical example, let's do this exercise: we have an elliptical notch in a finite plate as illustrated here.
So the long and short axes of this ellipse are given by a and b, the radius at the notch root is 5 mm, and the plate width is 100 mm. So how do we calculate the stress concentration factor for this one? Pause the movie here and try to find the answer yourself. So what is the best way to go forward? Well, we know for this given ellipse in an infinite plate that the Kt is equal to 1 plus 2 times the square root of a over rho, which in this particular case is 5. Now, for a circular hole in an infinite plate we know that the stress concentration factor is given by Kt = 3. So what we can do as a first-order approximation is to assume that for the finite plate dimensions the same ratio of elliptical over circular Kt applies as for the infinite one. Hence, with the equation of Heywood I can calculate the stress concentration factor for a circular notch in a plate of 100 mm width, assuming that the diameter of the hole is 2 times a. Then Heywood tells me the stress concentration factor is 2.2. So, assuming the same ratio of elliptical over circular notches as for the infinite plate, I can take the ratio of 2.2 divided by 3 times the Kt of 5, and that gives me a stress concentration factor of 3.7. Okay, this is an approximation, but how close are we? If we look up the chart in Peterson, for example, for the given dimensions of this problem, and we look up what the stress concentration factor is supposed to be, then we are fairly close. Although this is an approximation, it's a very good approximation. Now, we have seen that the stress concentration factor describes the influence of the geometry and is independent of the stress applied; however, it's not independent of the load case applied. So let us look at the role of the load case and the geometry on the value of the stress concentration factor. Let us first compare these three examples. We have a circular notch in a plate with a width of 100 mm; with the equation of Heywood we can calculate that the corresponding stress concentration factor is 2.25. If we add holes with a smaller diameter on both sides of that notch, we effectively reduce the stress concentration factor to 1.8. Obviously, if we replace those three holes by an ellipse with a long axis equivalent to the distance between those holes, then the stress concentration factor is further reduced. Now, here the stress trajectories are illustrated, because the key thing we should identify is that the stress concentration factor effectively describes how condensed the stresses locally are at the notch root. By adding those two additional holes on both sides of the larger notch, we effectively push the stress trajectories outward, and we reduce the stress concentration at the notch root of this larger-diameter hole. You can see that with the ellipse something similar is obtained, and hence the stress concentration factor is further reduced. Another example is this flat plate with two notches on both sides. In the notch root the stress concentration factor is 2.55, and we can illustrate with the stress trajectories that indeed at the notch root on both sides a stress concentration is present. Now, if we remove a piece of material and redraw the stress trajectories, we would identify that the density of stresses near the notch root is reducing. Hence, by removing this part of the material we effectively bring down the stress concentration factor. If we then also remove material further upstream, we can bring down the stress concentration even further, simply because less stress is concentrated at the notch root.
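The worked example can be checked in a few lines of Python. Note one assumption on my part: the video only shows Heywood's equation, so I use the commonly quoted form Ktn = 2 + (1 - d/W)^3 for a circular hole of diameter d in a plate of width W, which indeed reproduces the 2.2 above for d/W = 0.4.

```python
import math

def kt_ellipse_infinite(a_mm, rho_mm):
    # Elliptical notch in an infinite plate: Kt = 1 + 2*sqrt(a/rho).
    return 1.0 + 2.0 * math.sqrt(a_mm / rho_mm)

def ktn_heywood_circular(d_mm, w_mm):
    # Heywood's relation for a circular hole in a finite-width plate
    # (assumed form: Ktn = 2 + (1 - d/W)**3; Ktn -> 3 as d/W -> 0).
    return 2.0 + (1.0 - d_mm / w_mm) ** 3

# Lecture example: rho = 5 mm and Kt(ellipse, infinite) = 5, hence a = 20 mm;
# plate width W = 100 mm; hole diameter taken as 2*a = 40 mm.
a, rho, W = 20.0, 5.0, 100.0
kt_inf_ellipse = kt_ellipse_infinite(a, rho)     # 5.0
kt_fin_circle = ktn_heywood_circular(2 * a, W)   # ~2.22, the "2.2" above
kt_inf_circle = 3.0

# First-order approximation: scale by the elliptical-over-circular ratio.
kt_fin_ellipse = kt_fin_circle / kt_inf_circle * kt_inf_ellipse
print(f"Ktn circle (finite) = {kt_fin_circle:.2f}, "
      f"Kt ellipse (finite) ~ {kt_fin_ellipse:.1f}")   # 2.22 and 3.7
```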
So we can play with the geometry in order to change the value of the stress concentration factor. But now let's assume we have a similar notch geometry and we change the load case. If we have the sheet in tension with the notches on both sides, then the stress concentration factor is given by 2.55. If I now have a rod with a diameter equal to the width of that plate, and I have a groove in the circumference which has the same radius as the notch in the plate, what I will find is that in case I apply tension, the stress concentration factor is less than 2.55; actually, it's 2.23. If I now take the same rod and instead of tension I apply bending, the stress concentration goes down further. How does that work? I don't change the notch geometry, but by changing the sheet to a rod I change the stress concentration factor, and if I change tension to bending on the rod, I further change the stress concentration factor. What you should try to do for yourself is illustrate the stress trajectories, and you will identify that they will indeed be different. In any case, if we change the load case, even though we are talking about the same geometry, the stress concentration factor will be different. So if we have a fillet here with a fillet radius, we will have two different curves to illustrate the effect of tension and bending on the effective stress concentration factor: changing the load case changes Kt. The Heywood equation for an open hole describes that the Kt of 3 decays to a lower value if the width becomes smaller. But if I now apply a pin and I load the hole, then the stress concentration factor will be higher, simply because the load that I introduce by bearing passes by near the notch root, increasing the stress concentration factor significantly. The same diameter, the same width, but different load conditions give a different stress concentration factor. Let's take this example: we apply a bearing pressure on the circumference of the flange, which is in equilibrium with a tensile force on this side, and then depending on the radius we may change the stress concentration factor. But we can also change this stress concentration factor by keeping the radius the same but increasing the thickness of the flange; we can even halve it. Obviously, if this would be purely tension loading, then the stress concentration factor would be reduced further. What it illustrates is that the load case, together with the geometry to which we apply this load case, defines the stress concentration factor. So as a designer we have freedom there." "Spacecraft Technology: Onboard Computing","https://www.youtube.com/watch?v=ukX5_ICh-Zg","Welcome back. In this session we will discuss onboard computing. I will explain the differences between devices and provide a few examples. Processors have been around since the dawn of the personal computer. The focus of processors is on computation. Processors typically require peripheral integrated circuits to be able to function. They need, for instance, work memory and data bus controllers. Processors are the fastest for generic computation. With generic I mean that a processor serves a range of applications, and not just one specific application. It can, for instance, calculate the outcome of an equation, but it can also be used for logic operations like: if this condition is met, then do this, and else do that. Both calculations and logic operations are typical functions of an onboard computer. The typical average power consumption of processors is between 10 and 150 watts. In the picture you see a 386 processor.
Although this processor was introduced in 1985, a radiation-hardened version is still being used for the command computers in the International Space Station. Flight heritage, decades of mission design and the radiation environment are typical reasons why such ancient devices can still be seen in currently operational, large and expensive spacecraft. Microcontrollers focus on embedded systems. The difference is that they typically have less computational power than processors. However, they have integrated memory and other peripheral functions. Think of analog-to-digital converters, pulse width modulators, data bus controllers, etc. Also, their average power consumption is much less than that of processors, typically well below one watt. There are nowadays even microcontrollers which consume less than one microwatt. Comparing state-of-the-art processors with state-of-the-art microcontrollers is comparing apples with pears. A state-of-the-art microcontroller, however, has more computational capacity than the 386 processor which was shown before, and it does so at several orders of magnitude less power. Since the need for computational power for many spacecraft functions has not grown as much as processor speeds, there is a trend in the space sector to move towards microcontrollers, especially for smaller spacecraft. A field programmable gate array, or FPGA, is an integrated circuit with reprogrammable logic. You can design your integrated circuits in a programming language. It is not the easiest language, but the overall process is easier and more cost effective than designing regular integrated circuits. You can even buy or download so-called intellectual property cores, which are building blocks for FPGAs with specific functions, or even complete microcontrollers. FPGAs are relatively fast and low power for specific functions. However, an FPGA IP core of a microcontroller consumes more than a regular microcontroller. The European Space Agency has developed the LEON microprocessor family. This is an IP core which can be integrated in FPGAs. It can be combined with controllers of, for instance, the SpaceWire bus. It has much failure-tolerant logic inside, such as redundancy of critical functions, as well as intensive error detection and correction mechanisms. This microprocessor is becoming very popular at the moment for modern spacecraft. Now let's take a look into the future of onboard computing. We can go one step further than FPGAs with application specific integrated circuits, or ASICs. An ASIC is a complete integrated hardware solution for a specific application. It is basically designing a new chip. However, you can also use IP cores for specific functions, similar to FPGAs. The circuit structure can be optimized to be as fast and power efficient as possible. The disadvantage at this moment is that it takes more time and costs you hundreds of thousands of euros to produce an ASIC. I do expect, however, that in the next 10 years services will hit the market which can produce ASICs fast and cost-effectively, just like you can now have your printed circuit board produced for less than 100 euros. ASICs with many different integrated functions combined are so-called systems on chip, or SoCs. However, SoCs can also integrate generic applications like a microcontroller. SoCs can even go one step further and integrate sensors or micro-electromechanical systems within the chip package. You can therefore see them as the smallest complete systems.
Maybe one day this may lead to a complete satellite on a chip. Now let us take a look at a few examples of the onboard computers of several spacecraft. We already discussed the International Space Station. It has been around for a while, but it still uses the very old radiation-hardened 386 processors. The Spirit and Opportunity Mars rovers of NASA use a radiation-hardened BAE processor of IBM. It can run at up to 25 MHz and costs about 200,000 euros. Our own Delfi-C3 and Delfi-n3Xt satellites use an MSP430 microcontroller of Texas Instruments, running at 8 MHz and costing only about 2 euros. It is, however, a microcontroller intended for terrestrial applications. The GOCE satellite, which might be familiar if you follow the course on earth observation, uses a radiation-tolerant ERC32 processor from Atmel. This processor is discontinued and evolved into the LEON microprocessor family of ESA. The JUICE satellite going to Jupiter's moons will use a LEON2 microprocessor, while the Ariane 6 launch vehicle will probably use a LEON3 microprocessor. What is worth noting is that ESA already released the LEON4 processor in 2010, while both JUICE and Ariane 6 are planned for a launch in 2020. You can learn from this that the space industry is rather conservative and lagging behind the state of the art when it concerns major missions. There are several reasons for that, but the most important one is that you cannot replace a faulty device once launched, like you can do for terrestrial equipment. Also, the cost of a spacecraft is orders of magnitude higher than that of, for instance, your mobile phone or car. Last but not least, the launch and the space environment are harsh. In the next session, we will learn more on what the harsh environment and the lack of maintenance can do to your spacecraft equipment. Also, I will provide you some examples on how some of these failures can be isolated and recovered. See you back after the exercises." "Introduction to Spaceflight: Getting to space","https://www.youtube.com/watch?v=y08SV31ub6M","Hi, welcome to the start of the journey into space. A journey that always begins with a rocket launch. The first thing we're going to do is take a look at how rocket propulsion works. A key question is: how big does the rocket have to be to get us into space? A more direct way of posing this question is: how much rocket fuel, or propellant, do we need to accelerate the rocket to a sufficient velocity? But before we can tackle the question of how much propellant we need, we have to understand what velocity is sufficient to put a spacecraft into orbit. So we're going to take a look at that first. Imagine for a moment that we've been given the task of throwing a ball-sized spacecraft into space. In general, someone is going to pay us to get this ball into space, so we're going to call it a payload, of course. Since we're throwing it, all we can do is give it an initial vertical velocity at ground level. Given this velocity, how high is the ball going to go? In other words, what altitude will it achieve? We're going to make a very basic estimate of the initial speed using the principle of conservation of energy. The velocity we give the ball at ground level, the kinetic energy, will be converted into potential energy as the payload gains altitude. At ground level, we define the altitude or height h, and therefore the potential energy, to be zero. So the total initial energy is one half m v squared.
When the payload achieves maximum height, it'll have zero velocity, and its potential energy at that point will be m g h. Now we solve for h to get an expression which tells us the altitude achieved for any initial vertical velocity. Now, I'm sure you're very strong. So imagine you throw this ball upwards with an initial speed of 1.4 km per second. Given that the gravitational acceleration is 9.8 meters per second squared, we apply the formula and discover that the ball achieves a height of 100 km. Now, that's not too bad, as that's where the atmosphere ends and space begins. We generally talk about being in space at about 130 km. Beyond that point, the atmosphere is thin enough that we can sustain an orbit for a reasonable period of time. Keep in mind that this is a very rough estimate. We've ignored a number of effects, including air resistance and the change of gravity with altitude, for example. So let's aim a bit higher. We'll use the conservation of energy again, but this time we're going to figure out what velocity we need to achieve a given altitude. This time we solve for v. Now, we want to aim for a good healthy altitude at which sustaining an orbit is a reasonable proposition, say 300 km. We'll take a value for g again of 9.8 meters per second squared. Plugging these values into the equation gives us a velocity of 2.4 km per second. Now, it's important to re-emphasize that this is also a very rough estimate. Once again, we've neglected air resistance, the change of the gravitational force with altitude, and so forth. So now we've got our ball-sized satellite at an altitude of 300 km, and it's got zero velocity. That's fine for some applications. We could take a few photographs or make some other measurements, but pretty soon our payload is going to fall back towards Earth. So with the velocity of 2.4 km per second, we've created a ballistic trajectory, not an orbit. If this kind of speed is not enough to keep our payload in orbit, how much speed do we need? The diagram here is not at all to scale. We want to achieve a repeating trajectory, an orbit. Let's aim for a circular orbit for our satellite, which has a mass m, and it will have a circular velocity v. To figure this out, we have to enlist the help of Sir Isaac Newton. Newton's second law states that the sum of the forces on an object is equal to its mass times its acceleration. This is normally a vector equation, but we're only going to analyze it in the radial direction, so we can suffice with the scalar equation shown here. That's okay for now, because the only force we're going to consider is the gravitational force. And in this simple model, that force is along the radius in the negative direction. Indeed, we're going to simplify things even further for now, and assume the gravitational force is just mg. Now we apply Newton's equation by filling in the only force, mg, on the left-hand side of the equation. On the right-hand side, we replace the acceleration a with v squared over r, which is the radial acceleration for an object moving on a circular path. Now, be careful here. r is the length of the radius from the center of mass of the Earth to the center of mass of our satellite. It is not the altitude. So r is the Earth's radius plus the altitude h. Before we apply this equation, keep in mind that this is a very rough estimate again. We've neglected a host of other forces and effects, and we'll discuss how to deal with those later.
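These back-of-the-envelope formulas are easy to sanity-check numerically. Here is a minimal Python sketch under the same rough assumptions (constant g = 9.8 m/s², no air resistance, and an approximate Earth radius of 6,400 km, which I am supplying):

```python
import math

g = 9.8            # m/s^2, taken as constant (a rough estimate, as in the lecture)
R_earth = 6.4e6    # m, approximate Earth radius (assumed here)

# Height reached by a vertical throw: h = v^2 / (2*g)
v_throw = 1400.0                      # m/s
print(v_throw**2 / (2 * g) / 1e3)     # ~100 km

# Velocity needed to just reach a given altitude: v = sqrt(2*g*h)
h = 300e3                             # m
print(math.sqrt(2 * g * h) / 1e3)     # ~2.4 km/s

# Circular orbit, from m*g = m*v^2/r  ->  v = sqrt(g*r), with r = R_earth + h
r = R_earth + h
v_circ = math.sqrt(g * r)
print(v_circ / 1e3)                   # ~8.1 km/s

# The Earth's rotation at the equator contributes almost 0.5 km/s for free.
print((v_circ - 465.0) / 1e3)         # ~7.6 km/s still to be delivered
```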
Nevertheless, this will give us a decent first approximation of the velocity that we need. So the velocity for a circular orbit can be estimated by applying F equals m a. And we'll keep in mind that r is the length of the radius to the satellite, not the altitude. We divide both sides of the equation by the mass m, and solve for v. Now we've got what we need to make a rough first approximation of the velocity required to maintain a circular orbit at a particular altitude. So let's consider an altitude of 300 kilometers again, which is equivalent to a radius of about 6,700 kilometers. Using 9.8 meters per second squared for g, we find that we need a velocity of 8.1 kilometers per second to stay in a circular orbit. That's quite a bit more than the 2.4 kilometers per second that we needed merely to reach this altitude. Fine. So do we need to launch our satellite at this speed? Well, not quite. On the one hand, it's important to note that we need a far greater velocity to maintain a circular orbit at this altitude than we did just to reach it. On the other hand, we get a little help from the Earth. This is because our spot on the ground, our launch site, is moving due to the Earth's rotation. If we assume our launch site is at the equator, then the ground, and our satellite with it, is already moving at almost half a kilometer per second. That means we only have to add about 7.6 kilometers per second to achieve the required velocity for a circular orbit at 300 kilometers altitude. This required change in velocity is called the delta-V. Once again, this delta-V is a first rough approximation. The actual delta-V required will be higher. We have to overcome a number of forces, such as air resistance and gravity, among other things. Now that we have a feel for the velocity required to maintain a circular orbit, we need a rocket to accelerate our payload. The next step is to take a look at how rockets work." "Low Default Portfolios - Advanced Credit Risk Management Course (Sample Video)","https://www.youtube.com/watch?v=MRMdvZZFNHE","Hi there, welcome back. Now, one of the most frustrating situations when we are dealing with the estimation of the probability of default of a counterparty is the situation in which the data we have to estimate this probability of default contain no, or almost no, defaults. This is the typical case of low default portfolios. That is to say, portfolios in which, given the different types of counterparties we are considering, the number of defaults we have observed so far is negligible, and this actually creates problems in the reliability of the estimates we get for the probability of default of the different counterparties. Because you can imagine that we cannot really assume that, since we have observed no default so far for a given type of counterparty, the probability of default of that counterparty is actually zero. That would be a tremendous error. This is something that we want to avoid. Now, I have some good news and some bad news for you. Let's start with the good ones. The good news is that there actually are methodologies that we can use, more or less heuristic methodologies that come, for example, from the use of expert judgment and Bayesian statistics, but also from simpler considerations. And they give us ways of estimating the probability of default of a counterparty in the case of no, or almost no, historical defaults.
These methodologies can be combined with the other methods we have seen so far for estimating the PD of a group of counterparties. They can also be combined with methods that you use every day in your business life, for example logistic regression or probit regression, so the different types of linear models you may want to use. So we can combine these possibilities of estimating the probability of default of a counterparty, when we don't have data about the defaults of that counterparty, with the typical estimation methods we have seen so far. The bad news is that unfortunately there is no unique solution, so typically we may get quite different solutions depending on the method we choose, and the best thing we can do is really to estimate these PDs according to the different methods, and then make our evaluations by comparing them. The simplest starting point is to assume that the number of defaults for our counterparties follows a binomial distribution. The idea is that we consider n counterparties that are homogeneous in terms of probability of default, and the number of defaults we observe over time is just the result of n independent Bernoulli trials, where small p is the probability of default for each single counterparty, which is the same for all because we are assuming homogeneity for the counterparties. Now, the probability mass function of the binomial is something that for sure you know; it's one of the basic distributions for discrete random variables, and you see this probability mass function on your screen. There we have exactly the probability of observing k defaults over n companies, where p is exactly the probability of default for each single company. Now, the point is that we can use maximum likelihood to estimate this small p, and it's easy to show that the maximum likelihood estimator for p is exactly k over n. Now, what's the problem with this estimator? Consider the case in which k is actually zero, that is to say, we have observed no defaults so far. Then it is obvious, it is immediate, that the estimate of p would be zero as well, because p is estimated as k over n, and zero divided by n is zero. Now, you understand this is a problem, because this simple estimator tells us that, since we have observed no default so far, the probability of default is actually zero, and we know that this can be quite dangerous. The first estimator that we can use to overcome this situation, induced by the simple MLE estimator, is to assign some probability to the event of observing zero defaults, and then to derive the probability of default from that. In other terms, we have our probability mass function and we assume we observe zero defaults, so k equal to zero. We can assign a probability c to that event, say 0.5, and then we can solve for small p, that is, the probability of default we are interested in. In that case the estimator will be one minus 0.5 to the power of one over n. This is the specific case in which we assume that the probability of observing zero defaults is exactly 0.5. In case we assume other probabilities, our estimator will change accordingly. This is an estimator that, paradoxically, despite its simplicity, is quite used in the industry. We can then use the big family of Bayesian estimators, and here we can really produce a plethora of different estimators if we change our prior belief. Now let's start from the simplest case, and then you will find more details on the course platform.
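As a small illustration, here is a Python sketch of the two estimators introduced so far; the portfolio size n is an assumption chosen just to show the numbers.

```python
def pd_mle(k, n):
    # Maximum likelihood estimator: p_hat = k / n (zero whenever k = 0).
    return k / n

def pd_zero_default(n, c=0.5):
    # Assign probability c to the event of observing zero defaults:
    # (1 - p)^n = c  ->  p_hat = 1 - c**(1/n).
    return 1.0 - c ** (1.0 / n)

n = 100                              # assumed number of homogeneous counterparties
print(pd_mle(0, n))                  # 0.0 -- the dangerous "no risk" answer
print(round(pd_zero_default(n), 4))  # ~0.0069 with c = 0.5
```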
Let's assume, for example, the zero-default case, that is to say, when k is zero, so no defaults. Then we put a prior of the uniform type on our parameter, small p, the probability of default, and then we have that the mean Bayes estimator for the probability of default is simply one over n plus two. The derivation of this estimator is on the course platform, with some more details. Another estimator, which is called the upper bound estimator, is used by those scholars according to whom it is not logical that the p we can obtain in the case of zero defaults may be larger than the one we would actually obtain in the case of one default. In that case, what we can really call the upper bound, because the value we are using will be an upper bound for our probability, is simply one over n. So, very simple. Another possible method is the so-called confidence interval estimator, which relies on the basics of hypothesis testing. Given that we have observed zero defaults, we look for the largest p hat we would fail to reject as an acceptable estimate of p. This is done by the usual z standardization and solving for p. Z alpha is the standard normal quantile associated with a significance level alpha. The limitation of this method is due to the fact that the standard normal approximation typically works for a product of n and p greater than or equal to 6, as a common rule of thumb. In low default portfolios n times p is usually small; therefore the approximation may fail. You see, it is not that easy to deal with this problem of no observations. What can we say if we compare these first estimators? From a qualitative point of view, the four estimators show similar performances; the confidence interval estimator, A4, is usually the most conservative. Please notice that in the plot the x-axis scale is not respected. For the simple estimator, A1, we have taken the probability of observing zero defaults to be 0.5, that is, 50%. Changing this quantity, for example on the basis of expert judgment, leads to different estimates: for example, for n equal to 4, if we set c equal to 0.1, then p hat equals 0.44; for c equal to 0.8, p hat is 0.05. Other estimators you may want to have a look at can be derived from the rule of three, quite used in biostatistics and clinical trials, or estimators based on the common Poisson approximation of the binomial distribution and refinements given by the minimax criterion. Have a look at the papers I am uploading on the course platform. Many other estimators can be proposed, and it is almost impossible to consider all the possible estimators in a class. I am posting extra materials on the course platform that I hope you may find interesting. In the next class we will focus our attention a little bit more on Bayesian methods, but obviously I am here at your disposal if you have further questions. So again, please use the course forum and please ask questions for the casual Friday session. Goodbye." "Spacecraft Technology: Command and Data Handling - Introduction","https://www.youtube.com/watch?v=PIu6Lg58O6I","Welcome to the first lecture of spacecraft technology. In the first part, we will discuss the topic of command and data handling. We will start with a brief session which introduces this topic. The function of a command and data handling subsystem is to perform onboard operations and internal communication. This internal communication is used for commands and data between spacecraft subsystems.
The onboard operation nowadays is performed by software, which manages the entire spacecraft in an autonomous manner. This software also handles the data to be downloaded and the commands from operators on the ground. The name, command and data handling, is a legacy of the past, in which many satellite functions were still residing in analog circuits. With the shift towards the digital domain, the term does not fully cover the topic anymore. There is, however, no good alternative available which fully describes the subsystem. As an analogy which might give you a somewhat better understanding, you can regard the subsystem as the brains and nerves of the spacecraft. But for the best understanding, we had better look at a potential command and data handling architecture. Here, you see an example of an architecture of the command and data handling subsystem of a spacecraft. At the center, we have the onboard computer, abbreviated as OBC. The software of the OBC is in charge of onboard operations. It has a strong link with the electrical power subsystem, or EPS. First of all, the EPS can tell the onboard computer how much power is available and consumed. This can be vital information for onboard operations. The OBC can, for instance, decide to turn off an uncritical subsystem in case there is too little power available. Secondly, the OBC tells the EPS which subsystems should be turned on and which should be turned off. This communication typically goes through a low speed data link. The OBC receives commands from operators on the ground via the radio receiver. The OBC also sends packets of housekeeping data to a low speed radio transmitter, such that operators on the ground can monitor the spacecraft's health. For very small satellites, this low speed radio is sometimes the only one, and housekeeping data and payload data are combined. For most large satellites, however, the payload delivers a vast amount of data. The payload delivers this data via a high speed data link to a dedicated storage system. When the satellite passes a ground station, the onboard computer commands a high speed radio transmitter to downlink the data, which is retrieved from the data storage over another high speed data link. This way, the OBC does not need to process this high data traffic itself, and it can keep its internal resources dedicated to time-critical operations. It should be no surprise that the OBC also communicates with the payload and all other subsystems. This is to retrieve information on their health, as well as to command them to perform actions according to the operational scheme, or critical interventions. Most large satellites have a dedicated command decoder. In the nominal case, the command decoder has a data link with the onboard computer. It is used to change software parameters, to change the operational modes, or even to provide a complete revision of the onboard software. However, in some failure cases, the OBC cannot receive the commands, or cannot communicate with some other subsystems through the nominal data links. Many satellites have therefore implemented an alternative route, called high priority commanding. High priority commands are only for a limited set of critical functions. Think of switching a subsystem on or off, or resetting it to the default configuration. This route typically uses only analog components, which are less prone to radiation effects. These effects will be explained in a later session.
High priority commanding works with a sequence of pulses or tones and links directly to the subsystem, for instance the electrical power subsystem. This ends the introduction session. In the next session we will look at data links, onboard computers, and typical satellite failure cases and how the command and data handling subsystem can deal with them. Before watching those videos, I recommend you to do a few exercises first. Good luck!" "Design Against Stress Concentrations","https://www.youtube.com/watch?v=7xjKYLFa3Vk","So what are the steps we take in detail design against stress concentration factors? First, we have to ensure that all sharp notches and corners are rounded on drawings. Sharp notches mean by definition high stress concentration factors. Second, we have to maximize fillet and bend radii as much as possible. The larger the radius, the lower the stress concentration. The fillets illustrated here on the right-hand side show that improvements can be made in several ways. One may first try to increase the radius, but here one should realise that the fillet doesn't need to be a full quarter of a circle. One may take only a piece of it, leaving the sharp corner at the top. The lower two cases illustrate that a fillet created with an ellipse reduces the stress concentration further. Now, in practice, a sharp corner illustrated here on the left can be made smooth by adding a slope at a 45 degree angle. If we then add another slope at half of that angle, starting halfway along the first slope, and repeat that several times, one automatically creates a smooth transition with a low stress concentration factor. Third, avoid square holes. Ellipses are better, and if necessary one may try rounded square holes, but then take the radius as large as possible. Keep in mind the windows of the aerial direction finder in the Comet aircraft, which were too square. Fourth, avoid feathered edges. The local loads introduced by bolts are transferred into the component, which in combination with the radii imposes a big stress concentration factor. Instead, one should consider a continuous ring avoiding these radii. Fifth, avoid superposition of stress concentrations. As we have seen in a previous video, stress concentration factors rapidly increase when notches are superimposed. So, for the example of the lug, the drainage holes should be located as far away as possible from the stress concentration. Here three solutions are given, which all have their advantages and disadvantages. Either way, they should be preferred over the one with the drainage hole left or right. Depending on the location, one may consider beefing up the lug geometry, simply to reduce the stress somewhat further. Last, pay attention to the surface. Rough surfaces or surface markings created by machining, for example, easily increase stress concentration factors, resulting in fatigue crack initiation. To determine stress concentration factors in actual components, one may want to measure the strain gradient in order to estimate the Kt. There are different methods to capture the stresses or stress fields. In any of these cases, one should consider that the peak stress at the notch root is hard to determine, in particular if local measurements are performed with, for example, strain gauges. If one takes two or three strain gauges, one may approximate the gradient and extrapolate it towards the notch edge to estimate Kt.
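A minimal sketch of that extrapolation idea in Python (all positions and readings are invented for illustration; a real measurement would use the calibrated gauge locations and the applied nominal strain):

```python
import numpy as np

# Hypothetical gauge positions, in mm from the notch edge, and measured strains.
x = np.array([2.0, 5.0, 10.0])                  # mm
strain = np.array([2.45e-3, 1.95e-3, 1.50e-3])  # measured local strains (assumed)
nominal_strain = 1.0e-3                         # far-field nominal strain (assumed)

# Fit a quadratic through the three readings and evaluate it at the notch
# edge (x = 0) to approximate the peak strain that no gauge can sit on.
coeffs = np.polyfit(x, strain, 2)
strain_at_edge = np.polyval(coeffs, 0.0)

kt_estimate = strain_at_edge / nominal_strain
print(f"extrapolated peak strain {strain_at_edge:.2e} -> Kt ~ {kt_estimate:.1f}")
```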
But even with strain field measurements, extrapolation of the stress gradient towards the notch edge is required. Similarly, one has to consider that finite element analyses often give stresses which are calculated by interpolation within an element, which means that they do not represent the peak stress at the notch root surface itself. For this reason, accurate stress concentration factor determination requires very fine meshes. Last but not least, because fatigue generally implies crack initiation and propagation perpendicular to the principal stresses, it is highly recommended to determine stress concentration factors using these stresses rather than, for example, von Mises stresses. In all cases, comparison with approximations or handbook solutions is strongly advised. If no comparative cases are at hand, one may compare to approximations using the earlier discussed superposition principles." "Aircraft Performance Course: En Route Climb Performance","https://www.youtube.com/watch?v=4MaenP34StI","Hello. In the en route climb phase, from about 1500 to 3000 feet altitude up to the cruise altitude, the aircraft is flying either at a constant indicated airspeed or at a constant Mach number. The changing air density and air temperature with altitude result in an unsteady climbing flight, which is quasi-rectilinear; in other words, the flight path angle is fairly constant. In order to calculate the climb performance, we need the equations of motion for this specific condition. We start with the general equations of motion for symmetric flight. By setting the change of flight path angle equal to zero, these equations simplify. For normal operations, we can also make the small angle approximation and assume that the thrust vector is aligned with the airspeed vector. Hence, the two cosine terms become one, and the sine of the thrust angle of attack becomes zero. Altogether this results in a simplified set of equations: one equation that contains the climb angle as a variable, and a second equation which states that lift must equal weight. Now, we are interested in the climb performance in terms of the rate of climb. Let us look at the two equations of motion that we have over here; as you can see, there is no rate of climb in the equations. So I'm going to introduce it, and I do that by multiplying equation number one with the airspeed. If I do so, I get the following: I get mass times velocity times the change of speed with time, and that must equal thrust times velocity, minus drag times velocity, minus weight times v sine gamma. And if you pay attention, you see a couple of terms that may be familiar. First of all, thrust times velocity equals the power available from the engines. Drag multiplied with velocity is the power required to overcome the aerodynamic drag. And if we look at the airspeed vector as it is defined, then we can say that v times the sine of the flight path angle equals the rate of climb. So, if I insert that in this equation, I get: weight over g times v times the acceleration equals power available minus power required minus weight times the rate of climb. Now, if I bring all the powers and the weight to the left-hand side of the equation, I get power available minus power required, and if I divide by the airplane weight, I get that this must equal the rate of climb plus the airspeed divided by the gravitational acceleration times the acceleration of the airplane.
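In symbols (my LaTeX rendering of the spoken derivation, with $P_a = T V$, $P_r = D V$, and $\mathrm{RC} = V \sin\gamma$ as defined above), the result reads:

```latex
\frac{P_a - P_r}{W} \;=\; \mathrm{RC} \;+\; \frac{V}{g}\,\frac{\mathrm{d}V}{\mathrm{d}t}
```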
So, this says that power available minus power required, which is the excess power per unit aircraft weight, equals the rate of climb plus an acceleration term. So, this is our result, called the power equation. But what is the physical meaning of this equation? The power available is the useful energy per second we put into the system. We get this energy simply by burning fuel. Not all the energy in the fuel is transformed into power available; part of it is lost, for example as heat, into the atmosphere. And the power required is the energy per second that is required to overcome the aerodynamic drag, as you see over here. Combined, this is the excess power, or excess energy per second. Rate of climb is the change of altitude with time. Now, realize that potential energy is altitude multiplied with weight, so the rate of climb reflects the change in potential energy with time. Furthermore, the acceleration reflects the change of kinetic energy with time. Summarizing: the energy we have in excess is used to change the potential energy and the kinetic energy of the aircraft. And this is essentially the law of conservation of energy. Now that we understand the power equation, let's use it to calculate climb performance. As an example, I will take a climb at constant indicated airspeed. So, over here I have our basic power equation, which tells us that the excess power per unit aircraft weight is used to climb with the aircraft or to accelerate the airplane. Now, this is the general situation, and I'm going to rewrite it a bit. If the airplane would be in steady flight conditions, then the equation would look as follows: excess power per unit aircraft weight is rate of climb plus zero. So, we could call this the rate of climb that can be achieved in a steady situation. I call this equation one, and this one equation two. And of course, the rate of climb here is the rate of climb that can be achieved in either an accelerating or a decelerating flight. Now, if I combine equations one and two, I can replace the power terms by the rate of climb in steady flight, and that should be equal to the real rate of climb plus this acceleration term. Now, since we're in a climbing flight, instead of time in the equation, the dt, it would be nice to have altitude there. So, what we can do is rewrite it slightly by saying that the change of velocity with time is, in fact, the change of velocity with altitude multiplied with the change of altitude with time. That is allowed because the change of altitude over the change of altitude is equal to 1, so I'm essentially multiplying this derivative of velocity with time by 1. And since the change of altitude with time is the rate of climb, the acceleration term is actually the rate of climb times the change of airspeed with altitude. Now, let's insert that in our third equation. So, I take equation four here and put it in the third one, and what I get then is the following: the rate of climb in steady conditions is the real rate of climb plus v over g times dv/dh times the rate of climb, which in fact is equal to 1 plus v over g times dv/dh, times the rate of climb. Now, I can compute the ratio between the real rate of climb and the rate of climb that is attainable in a steady situation, and that will then be 1 divided by 1 plus the acceleration term dv/dh times velocity over the gravitational acceleration. So, this is what we will take.
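Putting numbers to this ratio is straightforward; here is a short Python sketch for the constant-equivalent-airspeed case that is worked out next. The ISA troposphere relations and the example values (10,000 m altitude, 140 m/s equivalent airspeed) are taken from that example, so treat this as a cross-check rather than new material.

```python
import math

# ISA troposphere (valid up to 11 km): T = T0 - L*h, rho = rho0*(T/T0)**(g/(L*R) - 1)
g, R, L = 9.80665, 287.05, 0.0065
T0, rho0 = 288.15, 1.225

def isa_density(h):
    # Air density in kg/m^3 at altitude h in meters.
    return rho0 * ((T0 - L * h) / T0) ** (g / (L * R) - 1.0)

h, v_eas = 10_000.0, 140.0     # example values used in the lecture

def v_tas(h):
    # Constant EAS: true airspeed grows as sqrt(rho0/rho) with altitude.
    return v_eas * math.sqrt(rho0 / isa_density(h))

# Finite difference over 100 m, mirroring the table lookup in the lecture.
dv_dh = (v_tas(h + 100.0) - v_tas(h)) / 100.0

ratio = 1.0 / (1.0 + v_tas(h) / g * dv_dh)
print(f"rho(10 km) = {isa_density(h):.4f} kg/m^3")   # ~0.4127, as in the tables
print(f"RC / RC_steady = {ratio:.2f}")               # ~0.73, the 73% quoted below
```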
And if we now assume that we're flying at a constant indicated airspeed, which in the incompressible situation is almost the same as the equivalent airspeed, then we can take the airspeed over here and rewrite it a bit. Because of course that is the true airspeed, and the true airspeed, let's highlight it by TAS, is of course related to the equivalent airspeed through a ratio of air densities. So, you know that the true airspeed is much higher than the equivalent airspeed. Basically, what we get over here is that we can multiply this with the square root of rho zero over rho. And that means that the dv/dh term can be written as follows: the equivalent airspeed times the change with altitude of the square root of the density ratio, and that's nice, because the equivalent airspeed is constant. So, we can say that this is the equivalent airspeed times the change of the air density ratio with altitude, basically. So, we see that the rate of climb during the en route climb phase, compared to the rate of climb that can be achieved in steady conditions, purely depends on the change of air density with altitude and the airspeed selected by the pilot. Now, with knowledge of the atmosphere this can be solved. Take for example a flight at 10,000 meters altitude. If we look up the tables for the international standard atmosphere, like those printed in this book, we can find that the air density at 10,000 meters is 0.4127 kilograms per cubic meter; 100 meters higher it is 0.4076, and at sea level it equals 1.225 kilograms per cubic meter. Therefore, the change of air density with altitude becomes this ratio. You could also solve this in a more elegant way by using the equations representing the international standard atmosphere. I have used this result to calculate the ratio of the rate of climb over the rate of climb in steady flight for an example scenario of an aircraft at 10,000 meters with an indicated, or equivalent, airspeed of 140 meters per second, which results in a Mach number of 0.8 at that specific altitude. In this scenario, the rate of climb is only 73% of the rate of climb that can be achieved in steady flight conditions. Quite a significant difference, if you ask me. Part of the energy from the fuel is not used to increase altitude, but to increase airspeed, or kinetic energy. To conclude: the aircraft is either accelerating or decelerating in the en route climb; in other words, the en route climb is unsteady. As a result, the rate of climb is quite different from the rate of climb that can be achieved in a steady flight condition with constant true airspeed. Atmospheric data is required to calculate the difference between the actual rate of climb and the achievable rate of climb in steady conditions. I recommend you try these calculations yourself now." "Aircraft Performance Course: Interview with a Pilot about Take-off Maneuver","https://www.youtube.com/watch?v=sn9XTpwbT7k","Hello Hans, welcome to the studio. Today we will talk about the take-off maneuver. Now, you are a flight test pilot on a Cessna Citation aircraft. The take-off maneuver is quite a dynamic maneuver, which takes a short time in which you have to perform a lot of actions. If we assume that we're on a calm day, so no wind or other disturbing factors, could you explain which actions you actually take during a take-off? Of course. We have the aircraft, the aircraft is lined up on the runway, and we first have to wait for the call from air traffic control that we are cleared for take-off. That's really important, of course.
Then we push the throttle to the take-off thrust. The aircraft will accelerate. We accelerate to the decision speed, and the flight is on. We rotate the aircraft, the aircraft pitches up, and then we climb further. And of course, the aircraft before take-off is configured for the take-off. So the flap setting is in take-off setting, and just after take-off, we retract the gear and we retract the flaps. Okay. And as I understand it, this decision speed is very important during the take-off. We call it out in the cockpit, I understand. Can you tell us a bit more about the decision speed? What is it, and why do we have to use it? Well, the decision speed is when you make the decision to either fly with a problem that you encounter, or stay on the ground with that problem. And that means that it depends on the standard operating procedures of the airline, but they are roughly the same for all of them. Up to a certain speed, in our case 70 knots, you stop for any problem, for example a bird strike, or any light on your annunciator panel; between 70 knots and the decision speed, you will only stop for the bigger problems, like an engine failure; and after the decision speed, whether you have an engine failure or an engine fire, you're still going to take off. So that decision speed is important. So that means that the aircraft is fully capable of climbing out even with an engine on fire, or a failed engine? Absolutely. Okay, well, that's nice to hear, of course. But it's also surprising, maybe, that with an engine failure you're still going to take off, even if you have a lot of runway in front of you. Yeah, that's true, because there's so much energy in the aircraft that if you would brake, the brakes would get really hot, and then you can blow your tires, and of course, the end of the runway is near. Right, and it's difficult for you to compute, almost, how much distance you would require. We can't; that's the reason why you compute a decision speed beforehand, before you go for take-off. Okay, and I guess this only holds for a multi-engine aircraft; in the case of a single-engine aircraft, how would you then do this? Well, if the engine would stop, then you glide towards a spot where you can put the aircraft. Okay, clear. And one factor you mention is that you rotate the aircraft during the take-off. Is there any variation in piloting technique? Well, the SOPs say you gently rotate the aircraft towards about 15 degrees pitch up; in my case, that will result in 150 knots indicated airspeed. But of course, if you are really slow in rotating, then you will use a lot more runway for take-off. Okay, clear. And one thing I said at the start is that we're assuming a calm day, no wind, but I noticed that aircraft always take off with headwind. Why is that actually the case? Well, you always take off with headwind because then the runway you will use during take-off is less. And of course, you will climb a little bit better compared to the ground. Okay. And what you may also encounter during take-off is crosswind. How does that affect your take-off maneuver? Well, at first, when you're speeding up, you're aligned with the runway. But let's assume that the crosswind comes from the left, in my case; then you will rotate, and soon the aircraft leaves the ground, so the wheels are not on the ground anymore.
You will let the aircraft rotate into the wind, because then you're flying coordinated, and that means that you're still following the path of the runway, but you have less drag. If you're not flying coordinated, you have more drag, and that results in decreased climb performance. Okay, so it's merely for the climb performance that you will let the nose of the aircraft go into the wind. Yeah, you always aim for coordinated flight. And do you have to do anything with the pedals in that situation? In that situation, you do, because when you are on the ground, in this case, you have a little bit of right rudder in the aircraft to keep the nose aligned with the runway, but of course you center the pedals as soon as you have left the ground. Okay, well, thanks for this nice explanation of the whole take-off maneuver. Okay." "Aircraft Performance Course: Turning Performance - Maximum Load Factor","https://www.youtube.com/watch?v=0kIGywtYsBY","Hello. Maximum turning performance, in case we are considering horizontal steady coordinated turns, can be expressed in the time required to turn or the radius required to turn. As you can imagine, the bank angle plays an important role in this. The steeper an aircraft is banked at a specific airspeed, the smaller the turn radius and the time to turn will be. In terms of maximum turning performance it is therefore important first to solve the problem of the steepest turn. The steepness of a turn is defined by the bank angle, and therefore also by the load factor. So how do you determine the maximum load factor that can be achieved? For this, the performance diagram can be used. It shows the maximum thrust available and the aerodynamic drag in symmetric flight. However, in a turn, the pilot will have to increase the angle of attack, and thereby the lift and drag coefficients. So the basic performance diagram is not valid for turning flight. Obviously, engine performance is not affected by the bank angle of the aircraft. The aerodynamic drag as a function of airspeed, on the other hand, is affected. Let's see how drag as a function of airspeed changes in a turn. To do this, I will highlight one specific point on the drag curve. It is associated with a specific angle of attack, and therefore with a specific lift coefficient and drag coefficient. Now, assume the angle of attack is kept constant. Lift coefficient and drag coefficient are fixed, and we determine what happens to airspeed and drag. Let me show you how that works. If we start with the load factor equation: load factor is by definition lift divided by weight, and lift can be expressed as CL times one half rho v squared S. Now, in the diagram we saw airspeed on the x-axis, so let's rewrite this equation and single out airspeed. If you do that, you get that airspeed is the square root of n times weight over S times 2 over rho times 1 over CL. And this is equal to the square root of n times the square root of weight over S times 2 over rho times 1 over CL. And remember that this last term is the airspeed in horizontal symmetric flight. So this is what happens to the airspeed. And now let's have a look at the aerodynamic drag. Drag is by definition CD times one half rho v squared S. And v squared, of course, we know what the airspeed is, so it's CD times one half rho times n times weight over S times 2 over rho times 1 over CL. These terms of air density nicely cancel out; we have a half here and a 2; and of course I forgot to write down the S over here, so the S cancels out as well.
And what we end up with is the following: we see that drag is CD over CL times the load factor times the weight. Now, in fact CD and CL are only a function of the angle of attack, and we kept that constant. So essentially: the airspeed must increase proportionally to the square root of the load factor, to maintain the vertical force equilibrium in case the angle of attack is kept constant. The drag, however, will increase proportionally to the load factor for a fixed angle of attack. So this point over here is going to move upwards and to the right. And you could do that exercise for this complete graph. As a result, the whole drag curve shifts upwards and to the right. Two interesting things can be observed. The minimum airspeed, all the way over there, is increased, and the stall limit is encountered at a higher speed, because part of the lift is used for turning and not to balance the weight. And this must be solved by increasing the lift through an increase in dynamic pressure. At the same time, it is not possible anymore to fly at certain airspeeds, since the drag is more than the available thrust, as you see over here. So there are two factors limiting the turn performance: the aerodynamic limit and the propulsion system. In principle, my objective is to determine the maximum load factor at each airspeed. Let's say we calculated the performance diagram for various load factors, and you can see the different drag curves over here. Now let's go through the airspeed range, starting with the lowest airspeed in horizontal flight. There the stall limit is encountered at a load factor of 1; hence the maximum achievable load factor is 1. At a slightly higher airspeed the stall limit is encountered at n is 1.5, in this specific example, so the maximum achievable load factor is 1.5. If we go to an even higher airspeed, the thrust limit is encountered before the aerodynamic limit. In this case, the maximum achievable load factor is 2. Now we can go through this complete diagram to obtain all combinations of maximum load factor and airspeed. You can see there is one airspeed which gives the highest load factor of the complete airspeed range. Typically this occurs near the minimum drag condition, where the ratio of CL over CD is maximum. Concluding: the maximum achievable load factor as a function of airspeed can be determined quickly based on the performance diagram. It essentially depends on the aerodynamic characteristics of the aircraft, the propulsion system characteristics, and finally also on two factors which we consider constant for the moment: the air density, or altitude, and the aircraft weight." "Aircraft Performance Course: Why Use Simulation?","https://www.youtube.com/watch?v=HMtJE_1amY8","Hello. Flight simulation: why do we need it? Before I address this important question, let me first define what flight simulation means in the context of aircraft performance. When hearing the words flight simulation, you may think of flight simulators used for training pilots, to perform scientific research, as is done with this research simulator available at Delft University of Technology, or simply to have fun.
In all these types of simulation, there is somebody representing the pilot, providing input through an interface such as a joystick to a computer, which calculates the motion of the aircraft; the computer visualizes this motion and, in the case of motion-based simulators, drives the actual motion. In these pilot simulations, the aircraft is represented as a rigid body with inertia, resulting in six equations of motion. This is called a flight dynamics model, and it can be used for stability and controllability calculations. This is not the type of simulation I am referring to. In all the theory covered so far, I have treated the aircraft as a point mass without inertia, which can be represented with three equations of motion. This means it can be used to compute the trajectory of an aircraft, but it cannot be used to compute rotational motion around the center of gravity. This also means that it is applicable to simulations with a longer time scale and a larger time step, such as a climb of several minutes. Flight dynamics models can be used to simulate problems which have a smaller time scale, such as a roll maneuver from level flight to a specific bank angle, which only takes a few seconds or even less. So why do we need aircraft performance simulations? They are needed to make accurate computations of aircraft trajectories, fuel burn, and performance parameters such as time to climb, minimum turn radius, etcetera, in situations where it either becomes too complex to make use of analytical computations or where very accurate results are needed. An example of a complex situation is the trajectory of an aircraft landing in crosswind conditions. In such a case, the crosswind may be varying as a function of altitude. That significantly complicates the equations of motion and makes it hard to find an analytical solution. An example of a situation where high accuracy is needed is flight planning. Over here, I have a flight management computer. In some sense this has a similar functionality to a smartphone when used for navigation. This computer can help in planning a flight. For a given trajectory it can accurately compute the fuel burn and thereby the amount of fuel needed. It can also help to calculate the optimal trajectory of the aircraft in terms of fuel burn or another quantity, such as the time required to get to the destination. It is also possible to put many accurate results in an extensive database and to provide these in a flight manual. Another example of the use of aircraft performance simulations can be in the context of aircraft design. Whenever a new aircraft is designed or a design modification is made, performance calculations must be made. Ideally, all calculations are very accurate, such that an aircraft will perform exactly as predicted when its first flight is made. Concluding: aircraft performance simulations are needed both for aircraft operations and for aircraft design. These simulations should not be confused with pilot simulations. The question now is how to develop aircraft performance simulations, and that is a whole different topic." "TU Delft – Your Online Learning Experience","https://www.youtube.com/watch?v=61x9LtpkPSY","Hundreds of thousands of students from around the globe have already profited from TU Delft's online courses in various topics such as aerospace, design and sustainable innovation, civil engineering, computer science, and solar energy.
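To connect this back to the simulation lecture: a minimal point-mass performance simulation of a quasi-steady climb, using only the three-equation model described above. Every parameter value is an invented assumption for this sketch:

```python
import numpy as np

# Point-mass performance simulation of a quasi-steady climb (no rotational
# dynamics). All aircraft parameters are invented for this illustration.
m, g, S, rho = 5000.0, 9.81, 20.0, 1.225   # mass, gravity, wing area, density
CD0, k = 0.02, 0.05                        # assumed parabolic drag polar
T, V = 15000.0, 80.0                       # thrust (N), constant airspeed (m/s)

x, h, t, dt = 0.0, 0.0, 0.0, 1.0
while h < 2000.0 and t < 900.0:            # climb to 2000 m, or time out
    q = 0.5 * rho * V**2
    CL = m * g / (q * S)                   # lift approximately balances weight
    D = (CD0 + k * CL**2) * q * S          # drag at that lift coefficient
    gamma = np.arcsin((T - D) / (m * g))   # quasi-steady climb angle
    x += V * np.cos(gamma) * dt            # integrate the trajectory
    h += V * np.sin(gamma) * dt
    t += dt
print(f"time to climb: {t:.0f} s, ground distance: {x / 1000:.1f} km")
```

With a wind profile or a thrust model added, the same loop structure is what makes such trajectories computable when an analytical solution is out of reach.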
Being able to complete my master's degree at a great university without the cost of moving country is the main reason I chose this course. TU Delft offered me the flexibility to work at my own pace, but still the opportunity to participate in weekly discussions with designers around the world. So, yeah, the online masterclass was really a unique experience in that way. I was worried that an online security course would be too abstract, but in the past six weeks I've gained a lot of knowledge through real-world assignments, and in fact these have helped me drive change in my own company. The course staff was very helpful and very encouraging, especially when it came to completing assignments, and our professor would create a video each week in response to our online discussions, which was awesome. At TU Delft, we value student-instructor contact time. So even in fully online courses, we try to make sure that there is similar access to qualified teachers to help you in your course. We are committed to creating interactive and accessible online courses for a variety of people, from graduate students to professionals and leisure learners. So, how did students perceive their online learning experience? Taking this next to a job was not easy, but it was definitely worth it. The learning platform was really easy to navigate. All the assignments were clearly laid out, which is really important when you don't have much spare time. What makes a TU Delft course special? At its core, learning at a university is not just about getting a piece of paper that gives you a degree; it's about teaching students to think in a very specific way. The thing I like most about studying online is that you can decide for yourself when to watch lectures and when to work on coursework, and you can decide which time you use for each of those things. It was all the extras you get: the extra material, the activities, the real-world case studies. In fact, I finished the course with a deeper and broader understanding than I expected. We invite you to join our community of students and to take the next step in shaping your future." "Waste Reduction Programs for Hospitals - Video Lecture (Sample)","https://www.youtube.com/watch?v=BT8LLPMZBvA","First of all, here you see a very nice overview of the products that you can make from other waste streams. This is important, because you can process thousands and thousands of tons of waste into new raw materials, but you also want to ensure that you use them for the right purposes. Now, we currently have 30 instrument parts and components, and I have brought the types in stock that are made from this kind of waste, but we see that there are thousands and thousands of products that are now being made from virgin raw material that could actually be made from our new base materials. So what you see here are three examples of instruments and parts that have actually been developed and are now placed on the market. For the top one, the instrument opener, we even already have CE certification, so it can be sold, and it is currently being sold in eight different nations. So it is possible. But it is also interesting to note that we can not only make material that replaces other material; we also develop technology that allows us to prevent the creation of waste.
For example, this is a very nice technology currently in development by one of our students, and what it actually does is allow you, in a controlled way, to seal the wraps that are normally used to ensure that the cleaning process of trays and instruments is properly executed. So normally a package is folded around a tray before sterilization, and in order to close the package you actually use tape. The tape is made from paper, and the problem is exactly that it is paper: it is a waste stream that basically does not go well with the polypropylene. So it builds up in the filter, and then you have to stop the whole process in order to take it out. Well, it is very simple here: you don't need tape anymore. You just seal the flaps of the package and connect them, and therefore you don't need tape at all. This saves a lot of time, energy and money within the recycling process of this kind of blue material. Another very nice example of technology that prevents the generation of waste is the Adelaide project. Within this project we are developing a new robotic platform that works with reusable instruments instead of disposable instruments. Nowadays, per procedure, three disposable instruments are being used, and those are disposed of after surgery. This generates a lot of waste, but it is also a huge financial burden because of the high costs related to these disposable instruments. So you can imagine that if you do not throw them away anymore, but build them in a robust way and use them hundreds of times, this kind of platform becomes more accessible also for low- and middle-income countries, or for hospitals that are somewhere in between financially wealthy and a little less well off. So that is not only a huge impact financially, but it is also very good in the fight against disposable waste." "Introduction to Basic and Advanced Techniques in Machine Learning #ML #AI #machinelearning","https://www.youtube.com/watch?v=9i54PlQ2Cls","Hello, my name is Tom Viering and I'm an assistant professor at TU Delft. We are introducing two new MOOCs. The first one is called Supervised Machine Learning. This course will cover the fundamentals of machine learning, focusing on classification and regression. The second MOOC is called Introduction to Unsupervised, Deep and Reinforcement Learning. It introduces more advanced topics; therefore, we recommend that you take the Supervised Machine Learning MOOC first. Let's discuss why you would like to use machine learning in the first place. It turns out that programming computers can be rather difficult. The key idea of machine learning is to let the computer program itself. We will show the computer examples of what we want, in the form of data, and the computer will teach itself how to do the task that is illustrated. Given enough data and compute, we can achieve really impressive results with this technique. For example, we can obtain better algorithms than we as humans can design by hand. So this is a really useful technique to automate boring, time-consuming or difficult tasks. Okay, let us talk about the MOOC that teaches supervised machine learning. What will you learn? This MOOC focuses on building algorithms for classification and regression. So what is classification? Well, in classification, the machine, or the model, has to sort objects into different categories. One example you may be familiar with is that of spam filtering.
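As a minimal illustration of such a classifier (the tiny data set, the model choice and all names are invented for this sketch and are not course material):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented data set: 1 = spam, 0 = not spam
emails = [
    "win a free prize now", "cheap pills limited offer",
    "meeting agenda for tomorrow", "lunch at noon?",
    "free offer click now", "project report attached",
]
labels = [1, 1, 0, 0, 1, 0]

# Bag-of-words features feeding a naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer", "agenda for the project meeting"]))
# -> [1 0]: the first message is classified as spam, the second is not
```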
The spam filter sorts your email into the categories spam and non-spam. In this MOOC, you will learn various techniques for building such classification models. In regression, a model instead needs to assign objects a number. An example you could think about is predicting the success of a marketing campaign in terms of the number of sales. Say you use a particular budget for your TV advertising campaign: how many sales do you get for this particular budget? This is a typical regression example, because we need to predict a number, the number of sales. In the second MOOC, you will learn about the more advanced topics of unsupervised learning, deep learning and reinforcement learning. Note that deep learning and reinforcement learning are very large topics; this course will only cover the basics. Let's start with unsupervised learning. In unsupervised learning, in contrast to supervised learning, there is no clear prediction task. We have a large amount of data that we want to make sense of. As an example, you could think of a web shop that has a large amount of data about its customers. In such a situation, being the web shop, you want to analyze this data to better understand your audience and to boost your number of sales. We cover two topics regarding unsupervised learning. In the first one, dimensionality reduction, you will learn how to take a high-dimensional and complex data set and reduce its dimensionality. For example, this can be used to turn such data sets into 2D data sets, so that they can be easily visualized. The second topic that we cover is clustering. In clustering, the goal is to find meaningful groups in your data set. For example, as the web shop, you may imagine that you will find different clusters that represent different groups of your customer base. Deep learning is a supervised machine learning technique. It uses artificial neural networks to solve more challenging classification and regression tasks. In this MOOC, you will learn the basics of how to build such deep neural networks. They can be used for various different inputs, such as analyzing images, audio, speech, text, etcetera, and as such they are the building blocks of the most recent AI models that are being released. Reinforcement learning teaches an AI algorithm how to interact with an environment. Examples that you can think of are an AI that learns to play a video game or one that controls a robot in the real world. By the way, did you know that ChatGPT was actually trained with reinforcement learning? In this MOOC, we teach you the basic concepts of this technique." "TN01_2023_College_Analyse_SKC-video","https://www.youtube.com/watch?v=qsHO7Ke6EJk","Good morning. Welcome, everyone, to this first lecture of Analysis. What is analysis? You have to be aware of all the subtleties: the linearities and the nonlinearities, the singularities of the mathematics on which everything you are going to study is built. Physics is written in the language of mathematics; think of a law like F = m a, and without the mathematics you cannot do the physics. So mathematics is what we are going to sharpen here, and today that starts with calculating with numbers and, above all, with vectors.
This course provides the basis for what follows in your program, and also the techniques you need to work out problems properly. Today we are going to have an overview of vector calculus: the dot product and the cross product. Here I have a list of concepts that are all covered today; we will go through it. It is an enormous list, so I had to think about where to begin. But the beginning is the vector. Exactly: what is a vector? Formally, a vector is an element of a vector space, but you should think about it concretely. On a computer, a vector is just an array of numbers stored in an object, but for us a vector is different: a vector is a quantity with a direction and a length. As we work in three-dimensional space, a vector has three components, so you can say that it has an x, a y and a z component. I write that as a column, with u1 the x component, u2 the y component and u3 the z component. And with x, y and z, I mean three-dimensional space: the vector space we call R3. But here is a really important point. In the first instance you have x, y and z as coordinates, and you can mark a point here and a point there: say a point P with coordinates p1, p2, p3, and here another point Q with its three coordinates. The subtlety between a vector and a point is that a point is a fixed location, but a vector is an object with only a direction and a length. So you can pick the points P and Q in space and draw the vector u that runs from P to Q; but if I draw the same arrow starting at the origin, or somewhere else entirely, and even make it as long again, it is still the vector u. A vector is not tied to one place. And if the arrow from P to Q is u, then you can find the components of u from the coordinates of the two points: the x component of u is q1 minus p1, and likewise u2 is q2 minus p2 and u3 is q3 minus p3. The vector that runs from P to Q we can denote by PQ with an arrow, but for efficiency we often use a single letter. And we can be even more efficient, because there is also the position vector: the vector that runs from the origin to P. Its components are exactly p1, p2, p3, the coordinates of the point, so we do not really distinguish the point P from the position vector of P; we denote both by p. But remember that this is just one representative: any parallel arrow with the same length and direction is the same vector.
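In symbols, the notation just introduced (a reconstruction of the board work):

$$\vec u = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} \in \mathbb{R}^3, \qquad \overrightarrow{PQ} = \begin{pmatrix} q_1 - p_1 \\ q_2 - p_2 \\ q_3 - p_3 \end{pmatrix}$$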
Now, in physics you constantly meet two kinds of quantities. One kind is a scalar: something you can observe and measure, where the result is a single number. The other kind is a vector quantity, which also has a direction. It is important to keep these apart when you look at a formula. Take the famous formula F = m a. The mass m is of course not a vector; it is just a number of kilograms, a scalar. The acceleration a is a vector, and a scalar times a vector is again a vector, so the force F is a vector; that is consistent. Velocity is also a typical vector quantity. Energy, on the other hand, is a scalar. Okay. Now we are going to see what you can do with vectors. The first operation is adding two vectors. We have two vectors, u with components u1, u2, u3 and v with components v1, v2, v3. Geometrically, you can imagine that you first travel along u, and then from there travel along v; the sum u plus v, the blue vector, is the vector straight from your starting point to where you end up. That is the head-to-tail rule, and it agrees with the parallelogram construction. In components it is very simple: the x component of u plus the x component of v is the x component of the sum, and the same for the other components. So u plus v is the vector with components u1 plus v1, u2 plus v2, u3 plus v3. That is the definition of vector addition in three-dimensional space, in R3. Now I have to mention one very special vector: the vector with length zero, which therefore has no direction either. That is the null vector. The null vector gets the symbol 0, but written as a vector, not as a plain number, because it is not a number: it is the vector whose x, y and z components are all zero. Then there are some other special vectors that are handy: the basis vectors of the space. Those are the vectors i-hat, j-hat and k-hat; I write a little hat on top of the letter, and you will see that this is a good notation. The vector i-hat is not a number but the vector 1, 0, 0: it points in the positive x direction and its length is 1. Likewise, j-hat is 0, 1, 0 and k-hat is 0, 0, 1. These are handy basis vectors, because out of them you can build everything.
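Summarizing the definitions just given (a reconstruction of the board work):

$$\vec u + \vec v = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ u_3 + v_3 \end{pmatrix}, \qquad \vec 0 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad \hat\imath = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\ \hat\jmath = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\ \hat k = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$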
Let me show how that works: you can travel, say, 3 steps along i-hat, so 3 times i-hat; then add 1 times j-hat; and then add 2 times k-hat. You end up in the point 3, 1, 2, and you have built the vector 3 i-hat plus 1 j-hat plus 2 k-hat. You can also write the basis vectors compactly on one line: 3 times i-hat is 3 times the column 1, 0, 0; then plus 1 times the column 0, 1, 0; plus 2 times the column 0, 0, 1. And you see what I do: element by element this adds up to the column 3, 1, 2. So these are two very common notations for one and the same vector; to be honest, the hat notation is used more in running text, while the column is very convenient in calculations. A quick check that you can read the hat notation: what is j-hat minus k-hat? Yes: the vector 0, 1, minus 1. So far I have used three-dimensional space. You can also work in two-dimensional space, in the plane; that is R2. Then you have only an x and a y axis, so a vector has two components, and of course you only need the basis vectors i-hat and j-hat: the vector 2, 2, for example, is 2 i-hat plus 2 j-hat. But you cannot literally compare the two settings: a vector in R2 has 2 elements, and a vector in R3 has 3 elements. Okay. Yes, what we are going to work out next is how long a vector is, because as I keep saying, a vector has a length and a direction. We use the following notation. If you draw, in the three-dimensional grid, a vector u, then the length of u is a number, a scalar, and we write it as u between vertical bars. For the direction we use a special vector: u-hat. But u-hat is not just a decoration on the letter: u-hat is a vector in the direction of u with length 1. You can picture it like this: you start at the tail of u and walk exactly a distance 1 in the direction of u; the arrow you have then traced out is the unit vector, as it is called: a vector with length 1 that carries the direction of u. So the bars give the length, a scalar, and the hat gives the direction, a vector of length one. Together they characterize u completely.
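In this notation, the relationship the lecture develops next can be written compactly (a reconstruction):

$$\vec u = |\vec u|\,\hat u, \qquad |\hat u| = 1$$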
So you can turn that around: how much do you have to stretch the unit vector u-hat to get u itself? You stretch it by a factor equal to the length: u is its length times its unit vector. In terms of length and direction, that is the whole story of a vector, and you can calculate with it. For example, take this vector: 0, 3, minus 4. Its x component is 0, so the vector lies in the y-z plane: you go 3 along the y axis and 4 downwards along the z axis. Those two legs make a right angle, so you can apply Pythagoras: the length is the square root of 9 plus 16, the square root of 25, so 5. So for this vector u, the length, u between bars, is 5. And then the unit vector: you divide every component by the length, so u-hat is the vector 0, 3/5, minus 4/5. You should check that this really has length 1: the squares of the components have to add up to 1, and indeed 9/25 plus 16/25 is 1, and the square root of 1 is 1. Dividing a vector by its own length is called normalizing, and the result, the unit vector, is always a vector of length 1 in the original direction. Okay. But the length of a vector with 3 elements: how do we compute that from the 3 components? What is the length of a vector u with components u1, u2, u3? That is what we will derive now, and here is where we are going. Take a rectangular block, and let u be its space diagonal: here is the origin, and u runs across to the opposite corner, up there. I can decompose u using two helper vectors. This line here, this vector which is vertical, is the vector with no x or y component, only the third component: 0, 0, u3. And this vector down here, which is horizontal, lies in the x-y plane: it is the vector u1, u2, 0. And you see that where they meet there is 90 degrees: the one is vertical, the other horizontal. And that is the point: it means I can treat the vector u with Pythagoras, as in this right triangle. So the square of the length of u is the square of the length of the horizontal side plus the square of the vertical side. I have no separate letters for these, so I will just keep calling the horizontal side the vector u1, u2, 0.
So now you apply Pythagoras a second time, and this is the step to see clearly: the horizontal side u1, u2, 0 lies in the x-y plane, and it is itself the hypotenuse of a right triangle with sides u1 and u2 along the axes. So its squared length is u1 squared plus u2 squared. The vertical side, 0, 0, u3, simply has length the absolute value of u3, because it lies along a single axis, so its square is u3 squared. Putting it together: the squared length of u is u1 squared plus u2 squared plus u3 squared, and so the length of u is the square root of the sum of the squares of the three components. The main step is that Pythagoras goes in twice: once inside the horizontal plane, and once in the vertical triangle; the same recipe, applied two times. And of course, once you can compute the length, you can also construct the unit vector u-hat, the normalized vector: you divide the vector by its length, component by component. Here is an example where you have to do that. Suppose you see a vector with components 6, 9, minus 18. You could plug these straight into the square root, but that is not really handy: the numbers get large, and you would be squaring 18. You can manipulate it first, because there is a common factor in the components: 6, 9 and 18 are all divisible by 3. So I am going to take out a big factor and write the vector as 3 times the vector 2, 3, minus 6. Taking a scalar factor out of a vector like this is allowed, and it makes the arithmetic much friendlier, because of the following rule: if a vector u can be written as c times another vector v, then the length of u is that factor times the length of v. But with the factor you have to be a little careful, and that is worth writing down precisely.
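The two results just derived, written out; the example numbers are those recoverable from the audio (a reconstruction):

$$|\vec u| = \sqrt{u_1^2 + u_2^2 + u_3^2}, \qquad \begin{pmatrix} 6 \\ 9 \\ -18 \end{pmatrix} = 3\begin{pmatrix} 2 \\ 3 \\ -6 \end{pmatrix}, \quad \left|\begin{pmatrix} 2 \\ 3 \\ -6 \end{pmatrix}\right| = \sqrt{4 + 9 + 36} = 7$$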
If u equals c times v, then the length of u is the absolute value of c times the length of v. You see two kinds of bars in that statement: around the scalar c they mean the ordinary absolute value of a number, and around the vectors they mean the length. That is the same symbol for two related things, and it is consistent: a negative factor flips the direction of a vector, but a length must always come out positive. So for our example: the length of 6, 9, minus 18 is 3 times the length of 2, 3, minus 6, and that inner length is the square root of 4 plus 9 plus 36, the square root of 49, so 7. Our vector is therefore a vector with length 21. And the unit vector in its direction: divide by 21, which gives 2/7, 3/7, minus 6/7. Whether you do the division on the original components or on the factored form, it of course comes out the same; the factored form just keeps the numbers small. Okay. We are on schedule. Does anyone have a question? No? Then one more small one to test the notation: what is the unit vector in the direction of the negative x axis? That is simply minus i-hat: the vector minus 1, 0, 0, with length 1, pointing along the negative x axis. Now a very practical use of lengths: the distance between two points. I take two points, say P and Q. What you do is make the difference: you form the vector from P to Q, so the components are those of q minus p. Or the other way around, from Q to P; for what follows it does not matter. The distance between the two points is then simply the length of that difference vector. So: two points give you a vector, and the length of the vector is the distance. Okay. Let us look at a question; I have put it on the slide, together with the definitions. Can you add a two-dimensional vector to a three-dimensional vector? Option A: no, because the two vectors live in different spaces. Option B: yes, because a two-dimensional vector is really a vector that sits in a plane inside three-dimensional space. This is a question about how precise our definitions are, so think about what they actually say. What do you think? Who votes A, you cannot? And who votes B, you can? Okay. Wait a minute.
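While the votes come in, the two rules from this segment, written out (a reconstruction):

$$|c\,\vec v| = |c|\,|\vec v|, \qquad d(P,Q) = \left|\overrightarrow{PQ}\right| = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + (q_3 - p_3)^2}$$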
Right, the votes are in. The answer is A; B sounds attractive, but it cannot be. Of course the idea behind B is understandable, but take a three-dimensional vector, a vector a, b, c, and try to add the two-dimensional vector c, d to it: that sum is simply not defined. The first is an element of R3, the second an element of R2, and because of that difference in spaces you cannot add them. Of course, we can define the vector c, d, 0, but that is not the vector c, d: c, d, 0 is a vector in R3 and c, d is a vector in R2. They are related, but they are not the same object. Okay, now we move on to the next operations. First, briefly, multiplying a vector by a scalar; we have actually already used it. For a scalar c and a vector u, the product c u is the vector c u1, c u2, c u3: every component gets multiplied by c. For example, 3 times the vector 0, 1, 0 is the vector 0, 3, 0. Then now the long-awaited operation: the dot product, also called the inner product. The dot product takes two vectors, u and v, and the notation is u dot v; you really have to write that dot, because the dot tells you which product you mean, and we will see later that there is also a different product of two vectors. The definition is this: for u with components u1, u2, u3 and v with components v1, v2, v3, the dot product u dot v is u1 v1 plus u2 v2 plus u3 v3. So you multiply the vectors component by component and add everything up. Note carefully: you put in two vectors, but what comes out is a number, a scalar, not a vector. That is a very special kind of combination. A couple of properties you can read off immediately. One rule is that u dot v and v dot u are the same; the definition is completely symmetric in the two vectors. Another one: what if you take the dot product of a vector with itself? u dot u is u1 u1 plus u2 u2 plus u3 u3, the sum of the squares of the components of u. But we have seen that expression before: it is the square of the length of u. So the dot product of a vector with itself is its length squared; that is another reason why the dot product is such a useful operation. There is a whole list of further algebraic rules for the dot product, and they all follow directly from the definition; I will let you check those yourself.
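The definition and the two properties just mentioned (a reconstruction of the board work):

$$\vec u \cdot \vec v = u_1 v_1 + u_2 v_2 + u_3 v_3 = \vec v \cdot \vec u, \qquad \vec u \cdot \vec u = u_1^2 + u_2^2 + u_3^2 = |\vec u|^2$$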
Now, what do you think of this dot product: u with the null vector? So u1, u2, u3 dot 0, 0, 0. Every term has a factor zero, so the outcome is zero. Does that mean u is orthogonal to the null vector? You cannot say the angle is 90 degrees; how could it be 90 degrees when the null vector has no direction at all? The way out is to turn it into a convention: we call two vectors orthogonal precisely when their dot product is zero. With that convention the null vector is orthogonal to every vector, and for ordinary vectors, as we will see in a moment, orthogonal means exactly perpendicular. Let us also compute the dot products of the basis vectors, because they are instructive. i-hat dot j-hat is 1 times 0, plus 0 times 1, plus 0 times 0: zero. And indeed i-hat and j-hat are perpendicular, so that fits. And i-hat dot i-hat? That is a vector with itself, so its length squared, and a basis vector has length 1: so that gives 1. The same for the others: every basis vector dotted with itself gives 1, and any two different basis vectors give 0. That is exactly why the component formula works. If you write u as u1 i-hat plus u2 j-hat plus u3 k-hat, and v as v1 i-hat plus v2 j-hat plus v3 k-hat, and you expand the dot product term by term, then all the mixed terms vanish and only the matching terms survive:

$$\vec u \cdot \vec v = (u_1\hat\imath + u_2\hat\jmath + u_3\hat k) \cdot (v_1\hat\imath + v_2\hat\jmath + v_3\hat k) = u_1 v_1 + u_2 v_2 + u_3 v_3$$

So the two descriptions agree. But the definition by components still does not tell you what the dot product means geometrically. Here is the idea. I am free to choose my coordinate axes, so let me lay the x axis along the vector u. Then u has components u1, 0, 0, and this u1 is nothing but the length of u. Now take the second vector v, making some angle theta with u, and compute u dot v: only the first term survives, so u dot v is u1 times v1. And what is v1? v1 is the x component of v: this horizontal distance in the picture. And you can already see that v1 is smaller than the length of v itself. v2 is this vertical distance, and between the horizontal and the vertical there is the 90 degrees.
This is the length of v, the hypotenuse, and we have a right angle, so we can use the basic trigonometry of a right-angled triangle: the cosine of the angle is the adjacent side over the hypotenuse. The adjacent side here is v1, and the hypotenuse is the length of v. So v1 equals the length of v times the cosine of the angle, and you see immediately that v1 is smaller than the length of v, because the cosine is at most 1. Now substitute that back. The dot product of u and v was u1 times v1; u1 is the length of u, and v1 is the length of v times the cosine of the angle. So: u dot v equals the length of u, times the length of v, times the cosine of the angle between them. I derived this in a convenient coordinate system, with u along the x axis, but you can rotate the picture however you like; the dot product, the lengths and the angle do not change. So this formula is completely general, and it is the geometric meaning of the dot product. You see the consequences immediately. The dot product is at most the product of the two lengths, because the cosine is at most 1 in absolute value. If the angle is 90 degrees, the cosine is zero and the dot product is zero: that connects perfectly to the orthogonality convention from before. And you can also see that this formula can be used to compute angles; that is what we will use it for most. Turn it around: the cosine of the angle is the dot product of u and v, divided by the product of the lengths. So the recipe is: compute the dot product from the components, compute the two lengths, divide, and then the angle theta is the arccosine of the result. That is the practical formula. Let us see whether you can apply all of this.
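The boxed result of this derivation (a reconstruction):

$$\vec u \cdot \vec v = |\vec u|\,|\vec v| \cos\theta \quad\Longleftrightarrow\quad \theta = \arccos \frac{\vec u \cdot \vec v}{|\vec u|\,|\vec v|}$$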
Okay. Let's see whether the questions have been answered; here it is, on the slide. First one: suppose that a dot b equals a dot c. Can you conclude that b equals c? Is that A, correct, or B, incorrect? What do you think, A? And B? It is incorrect: you cannot cancel a vector from a dot product. The equation only tells you that a dot b minus c is zero, so that b minus c is orthogonal to a; b and c can still differ by any vector perpendicular to a. Next ones: which of these expressions are even defined? Take a dot b, dot c, with both dots. a dot b is a number, and a dot between a number and the vector c is not defined; the dot product needs two vectors on its sides. What do you think: scalar, vector, or not defined? Not defined, exactly. Now the same without the second dot: a dot b, times c. a dot b is a number, so this is a number times the vector c, and that is perfectly fine: scalar multiplication. Scalar, vector, or not defined? It is a vector. And a dot, b dot c? Same story as the first: b dot c is a number, and you cannot take a dot product of the vector a with a number, so not defined. Last one: the length of a, times b dot c. Scalar, vector, not defined? That one is good: b dot c is a number and the length of a is a number, so it is a number times a number, a scalar. While we are at it: you also cannot divide by a vector; a quotient with a vector in the denominator is simply not defined. So with the dot you always have to check what kind of object stands on each side. Okay, now we are going to do the last thing with the dot product: projection. Here is the picture. I have a vector u, and I have a second vector v, and what I want is the component of u in the direction of v: how much of u points along v. To find it, you decompose u into a piece along v and a piece perpendicular to v. That is called the orthogonal projection, and you construct it by dropping a perpendicular from the tip of u onto the line along v: so you are actually creating a right triangle.
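Before continuing with the projection, a quick numerical check of that first quiz answer; all numbers are invented for this sketch:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([2.0, 5.0, -1.0])
c = np.array([2.0, -3.0, 7.0])   # differs from b only perpendicular to a

print(np.dot(a, b), np.dot(a, c))   # 2.0 2.0 -> equal dot products
print(np.array_equal(b, c))         # False  -> yet b and c differ
print(np.dot(a, b - c))             # 0.0    -> b - c is orthogonal to a
```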
Then you have the line from the tip of u, perpendicular to the line along v, and where it lands, that foot point, determines the projection: the projection of u onto v is the vector from the origin to that foot point. So the projection lies along v; it is really the part of u that points in the direction of v. For the notation I will write u sub v, the projection of u onto v; this is the notation from the book by Adams. Note that the small index says which direction you project onto, and that the projection is itself a vector, a component of u, not a number. So how do we compute this vector? A vector is fixed by a direction and a length, so let us find both. The direction is easy: the projection points along v, so its direction is the unit vector v-hat. And what is the length? The length of the projection is exactly the component of u in the direction of v. And here is the beautiful fact about dot products with a unit vector: the component of u along a unit vector is just u dot that unit vector. So the length of the projection is u dot v-hat. Putting the two together, length times direction: the projection of u onto v is u dot v-hat, times v-hat. That is already the formula, but usually you do not want to normalize v first, so let us substitute v-hat equals v divided by the length of v. Then you get u dot v divided by the length of v, times v divided by the length of v again, so the length of v appears twice: you get a square in the denominator. The projection of u onto v is u dot v, divided by the length of v squared, times the vector v.
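The projection formula, written out (a reconstruction of the board work):

$$\vec u_v = (\vec u \cdot \hat v)\,\hat v = \frac{\vec u \cdot \vec v}{|\vec v|^2}\,\vec v$$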
The denominator, the length of v squared, can also be written differently: the length squared is exactly the dot product of v with itself. So the same formula reads: the projection of u onto v is u dot v, divided by v dot v, times v. Both forms are used; in the second one, everything is expressed in dot products. Let us do one example to illustrate that, on the other board. We have a point P at 2, 3, 4, and we have a line through the origin and through the point 1, 1, 1. I want to project the point P onto that line. Here is the picture: the line through the origin, the point P off the line, and I take the vector from the origin to P; that is the position vector p, with components 2, 3, 4. The direction of the line is the vector v equals 1, 1, 1. What I want is the projection of p onto 1, 1, 1: the blue vector along the line whose tip lies at the foot of the perpendicular from P. So we apply the formula: p dot v in the numerator, v dot v in the denominator, times v. The numerator: 2 times 1, plus 3 times 1, plus 4 times 1, so 2 plus 3 plus 4, which is 9. The denominator: 1 plus 1 plus 1, which is 3. So the factor is 9 over 3, which is 3, and the projection is 3 times the vector 1, 1, 1: the vector 3, 3, 3. Notice, by the way, that the choice of direction vector does not matter. The line through the origin and 1, 1, 1 is also the line with direction vector 2, 2, 2. If you use 2, 2, 2 instead, the extra factor 2 appears squared in the denominator, and twice in the numerator: once through the dot product and once through the vector itself. So everything cancels and the projection comes out the same, and that is how it should be: the projection only depends on the line, not on which vector you happened to use to describe its direction. So now you have both the component along a direction, a number, and the projection onto a direction, a vector; with that, the dot product toolbox is complete.
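The worked example in symbols, with the numbers as recovered from the audio (a reconstruction):

$$\vec p = \begin{pmatrix} 2 \\ 3 \\ 4 \end{pmatrix},\ \vec v = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}: \qquad \vec p_v = \frac{\vec p \cdot \vec v}{\vec v \cdot \vec v}\,\vec v = \frac{2 + 3 + 4}{1 + 1 + 1}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \\ 3 \end{pmatrix}$$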
One more application, where the dot product does all the work: planes in three-dimensional space. An equation like a x plus b y plus c z equals d is a linear relation in the coordinates, and the set of points that satisfies it is a plane. I can make that concrete, because the natural way to describe a plane is by a direction perpendicular to it: a normal vector. Why is the normal vector so useful? Take a point here in the plane, and then another point in the plane. The vector connecting them lies inside the plane, and saying that the plane is perpendicular to the normal vector says exactly that this connecting vector makes 90 degrees with the normal. And 90 degrees is something we can now test with the dot product. Let us do an example. Take the plane with normal vector n equals 1, 1, 1, passing through the point 3, 0, 0. Now let x, y, z be an arbitrary point in the plane, and form the vector from the known point to this arbitrary point: x minus 3, y, z. That vector lies in the plane, so it must be orthogonal to the normal: its dot product with 1, 1, 1 has to be zero. Write it out: x minus 3, times 1, plus y times 1, plus z times 1, equals zero. But that says: x plus y plus z equals 3. A linear equation in the coordinates, exactly as promised. And you should also read the result backwards: if someone gives you a plane as a x plus b y plus c z equals d, then the coefficients in front of x, y and z form a normal vector of that plane. Our plane x plus y plus z equals 3 indeed has coefficients 1, 1, 1: the normal vector we started from. So knowing the plane equation and knowing a normal vector amount to the same thing.
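The example in symbols (a reconstruction): with normal vector $\vec n = (1,1,1)$ and the point $(3,0,0)$ in the plane,

$$\vec n \cdot \begin{pmatrix} x - 3 \\ y \\ z \end{pmatrix} = (x - 3) + y + z = 0 \;\Longrightarrow\; x + y + z = 3$$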
The components of the cross product follow a cyclic pattern in the indices: after 1, 2, 3 comes 1 again, and each component carries a minus sign between its two products, so u cross v = (u2 v3 - u3 v2, u3 v1 - u1 v3, u1 v2 - u2 v1). Two properties make this useful here. First, the output is perpendicular to both inputs: if you take the dot product of u cross v with u, all terms cancel and you get zero, and the same holds with v. Second, swapping the inputs flips the sign: v cross u is minus u cross v, so the two orders give opposite normal directions. This is exactly what we needed: the cross product of two vectors that lie in a plane gives a normal vector of that plane, and with that normal vector you can write down the plane's equation. That concludes this lecture." "TN01_2023_College_Natuurkunde_Rotatie_SKC-video","https://www.youtube.com/watch?v=51oY8JpI294","Welcome to this physics lecture on rotation. Today we will look at the quantities that describe rotation: angular velocity and angular acceleration, torque, moment of inertia and angular momentum, and along the way we will see several demonstrations, including the precession of a spinning wheel.
Where do we encounter rotation in physics? One example comes from a nuclear reactor: in the core, a reaction produces neutrons, which are slowed down by water and are very useful for studying materials, since they penetrate matter well and interact strongly with organic material. The neutron is electrically neutral, but it has a spin, and that spin gives it a magnetic moment, so it behaves like a tiny magnet. What you see here is a macroscopic model of that: a small magnet mounted so that it can spin freely. When the spinning magnet is placed in a magnetic field, the field exerts a torque on its magnetic moment, and instead of simply aligning, the spin axis starts to circle around the field direction. That precession can be measured very precisely, which is exactly what is done with the spin of the neutron. The same principle also has a purely mechanical variant, and we are now going to demonstrate this.
First, let me show you the setup so the camera can see it. We have a wheel that I can balance on a pin. If I just place it on the pin without spinning it, it immediately falls over, as you would expect. But now I spin the wheel first and then place it on the pin. Look what happens: it does not fall. Instead, the axis slowly swings around the pin. And notice the effect of the spin rate: spin the wheel fast and the axis sweeps around slowly; let it slow down and the sweeping speeds up. That steady circling of the rotation axis is the precession we just talked about, and later in this lecture we will have the tools to explain it.
So how do we describe rotation quantitatively? We do it in close analogy with translation. The first quantity is the angular velocity, in Dutch the hoeksnelheid: the angle swept out per unit of time, measured in radians per second. The angular velocity is a vector, written omega, and its direction lies along the rotation axis. Which of the two directions along the axis? That is fixed by the right-hand rule: curl the fingers of your right hand along with the rotation, and your thumb points in the direction of omega. Try it with your own hand, and you will see that reversing the rotation flips the vector to the opposite side. If the rotation speeds up or slows down, we need a second quantity: the angular acceleration alpha, a vector defined as the change of the angular velocity per unit of time, just as ordinary acceleration is the change of velocity in time.
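As a small illustration of omega as a vector: the lecture gives the right-hand rule and alpha as the time derivative of omega; the relation v = omega x r for the velocity of a point on the rotating body is our addition, consistent with the translation analogy. A minimal Python sketch:

    import numpy as np

    omega = np.array([0.0, 0.0, 2.0])  # 2 rad/s about the z-axis (right-hand rule)
    r = np.array([0.5, 0.0, 0.0])      # a point half a metre from the axis
    v = np.cross(omega, r)             # -> [0. 1. 0.], 1 m/s tangent to the circle
    print(v)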
That brings us to the cause of angular acceleration: torque, in Dutch the krachtmoment. Take this wheel and let me pull on it with a force. What matters is not only how big the force is, but also where it acts and in which direction. The relevant distance is the arm: the perpendicular distance from the rotation axis to the line along which the force acts. Think of a door: push close to the hinges and the arm is short, so you need a large force; push at the handle and the arm is long, so a small force is enough. And if you push straight toward the axis, along the arm, nothing rotates at all: the torque is zero, however hard you push. The torque is largest when the force stands perpendicular to the arm. As a vector, the torque is the cross product of the arm and the force, tau = r x F, and its direction follows from the right-hand rule, pointing along the axis of the rotation it tends to produce. So next to angular velocity and angular acceleration, the rotational counterparts of velocity and acceleration, torque is the rotational counterpart of force.
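A minimal sketch of tau = r x F for the door example above (the numbers are ours, chosen for illustration):

    import numpy as np

    F = np.array([0.0, 20.0, 0.0])         # a 20 N push, perpendicular to the door
    r_hinge = np.array([0.10, 0.0, 0.0])   # applied 10 cm from the hinge
    r_handle = np.array([0.90, 0.0, 0.0])  # applied 90 cm from the hinge
    print(np.cross(r_hinge, F))            # -> [0. 0. 2.]  : 2 N·m of torque
    print(np.cross(r_handle, F))           # -> [0. 0. 18.] : nine times as much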
Here you can see it on the flat disc: the same force gives a large torque with a long arm and a small torque with a short arm, and reversing the force reverses the sign of the torque, with the cross product keeping track of the direction for us: a torque driving a counterclockwise rotation points out of the plane, a clockwise one points into it. Now we need one more ingredient. Rotation depends on mass, but not only on how much mass; it also matters where the mass sits. We have here a rotation axis, the draaias, and here a point mass m at a distance r from it, moving around the axis with speed v. In translation the amount of motion is the momentum p = mv. The rotational counterpart is the angular momentum, the impulsmoment: the cross product of the position vector from the axis with the momentum, L = r x p. It measures how much mass is circling the axis, how fast, and how far out.
For our point mass on a circle the magnitude is easy: the angular momentum is the mass times the speed times the distance to the axis, L = mvr, and since v = omega r this becomes L = m r² omega: mass times distance squared, times the angular velocity. That combination m r² is so important that it gets its own name: the moment of inertia, the traagheidsmoment, with symbol I. For a point mass I = m r², and for an extended object you sum over all the mass elements of the body, I = sum of m r², so mass far from the axis counts much more heavily than mass close to it; a ring and a solid disc of the same mass therefore have different moments of inertia. With this definition the angular momentum takes the compact form L = I omega, the exact analogue of p = mv for translation. And just as a force changes momentum, a torque changes angular momentum: tau = dL/dt. This is the rotational form of Newton's second law. It is not a new law of nature: it can be derived from Newton's second and third laws.
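A minimal sketch of I as the sum of m r² over the mass elements, and of L = I omega (the masses and distances are ours, for illustration only):

    import numpy as np

    m = np.array([0.5, 0.5, 1.0])  # kg: three point masses of a rigid body
    r = np.array([0.2, 0.4, 0.1])  # m: their distances to the rotation axis
    I = np.sum(m * r**2)           # -> 0.11 kg·m²; far-out mass dominates
    omega = 3.0                    # rad/s
    L = I * omega                  # -> 0.33 kg·m²/s
    print(I, L)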
With tau = dL/dt we can finally explain the precessing wheel from the demonstration. The spinning wheel hangs on an arm from a pivot. Gravity pulls down on the wheel at the end of the arm, so about the pivot it exerts a torque tau = r x F, and by the right-hand rule that torque points horizontally, perpendicular to both the arm and the force. The angular momentum of the spinning wheel points along its axle, so along the arm. Now tau = dL/dt says that the change of L per unit time points along the torque, which is perpendicular to L itself. A change perpendicular to a vector does not make it longer or shorter; it only turns it. So the axle swings around horizontally at a steady rate instead of falling: that is precession. Note how important the reference point is here: you compute the torque about the pivot, not about the point where the force acts.
We can even estimate how fast the wheel should precess. The torque about the pivot is the weight of the wheel times the distance d from the pivot to its centre of mass, tau = mgd, with g about 10 metres per second squared, and the angular momentum is L = I omega. Since the torque only turns L around, the axle rotates at the precession rate Omega = tau / L = mgd / (I omega). Filling in the distance and the spin rate of our wheel, this comes out at roughly one radian per second, so a full sweep around the pivot takes on the order of six seconds, and that is indeed about what you saw in the demonstration. Notice also what the formula tells you: the faster the wheel spins, the larger L and the slower the precession, exactly as we observed. That concludes the physics for today.
Thank you for your attention." "Machine Learning: Building a Simple Classifier Using Histograms","https://www.youtube.com/watch?v=gAYlpfe8GwE","In this video we're going to discuss how to make a simple classifier using histograms. Imagine that we're trying to build a classifier that is going to classify tomatoes, the object at the top, versus tangerines, the object at the bottom. One approach to do this is to create a histogram for each class. Here on the x-axis we have a feature, for example the weight in grams. We can see that the x-axis is divided into bins. In the leftmost bin we have 1 tangerine and we have 0 tomatoes. In this highlighted bin between 60 and 70 grams we have 1 tomato and 2 tangerines. Now my question to you is: if you get a new object that falls in this highlighted bin, what class would you predict for this object? Please pause the video to think about the answer. Alright, welcome back. The answer is: since this bin contains more tangerines than tomatoes, we will predict a tangerine for this bin. This is exactly how the histogram classifier works. The rule is: for each bin we count how many times each class occurs in that bin, and we assign each bin to the majority class. In this case we observe two tangerines and one tomato, so tangerine is the majority class and we classify this bin as a tangerine bin. We can also do that for the other bins in the histogram. Observe that the pattern matches our expectations: we see the most tangerines on the left, and since we have more tomatoes on the right, we're going to classify the bins on the right as tomato and the bins on the left as tangerine. But there is one problem. Here in the bin between 70 and 80 grams we have 2 tomatoes and 2 tangerines. What should we do in this case? Well, one way to break the tie is to flip a coin: if it comes up heads we predict tomato, and if it comes up tails we predict tangerine. So far we have just one feature, a so-called one-dimensional classification problem, and we had 8 bins for that feature. Now my question to you is: how many bins do we get if we have 2 features and we use 8 bins per feature? Please pause the video and think of the answer. Okay, welcome back. If we have 8 bins per feature, then we have 8 bins along the first feature, and for each of those, 8 bins along the second. This means that we get 8 times 8 is 64 bins in total. We also observe a problem: many bins are empty. Empty bins have zero objects for both classes, and those are tied; we cannot decide what the majority class is. To determine the class we can flip a coin, but this is not a very satisfying solution, as we will probably make many mistakes. There are two better solutions possible. Please pause the video and have a think about it, and afterwards we discuss the answer. Okay, let us discuss the answer. The first solution is to try to get more training data. If we have more data, fewer bins will be empty, and thus we have to flip fewer coins. Another solution is to use bigger bins, or fewer bins. For example, here we can use 2 bins per feature. So far we talked about a 2-dimensional classification problem. But imagine now we have 5 features. How many bins do we have in that case if we use 8 bins per feature? Please pause the video and afterwards we discuss the answer. Okay, the answer is 8 to the power 5. We have to multiply because this way we get the combined bins. As we can see, this is quite a large number, more than 32,000 bins. This means we now need at least 32,000 samples to fill all the bins.
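The bin-count-and-majority-vote rule described in this video is short enough to sketch directly. A minimal Python version (NumPy assumed; the weights, labels and bin edges are invented for illustration, with 0 = tangerine and 1 = tomato):

    import numpy as np

    weights = np.array([55, 62, 66, 68, 73, 78, 81, 90, 95, 101])  # grams
    labels  = np.array([ 0,  0,  0,  1,  1,  0,  1,  1,  1,   1])

    edges = np.arange(50, 111, 10)               # six 10-gram bins from 50 to 110
    bin_of = lambda x: np.digitize(x, edges) - 1

    # For each bin, count how often each class occurs, then take the majority.
    counts = np.zeros((len(edges) - 1, 2), dtype=int)
    for w, y in zip(weights, labels):
        counts[bin_of(w), y] += 1
    majority = counts.argmax(axis=1)  # note: argmax breaks ties by picking class 0;
                                      # the video suggests a coin flip instead
    print(counts)
    print('prediction for a 64 g object:', majority[bin_of(64)])  # -> 0, tangerine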
And that is assuming that each object falls into a new bin. The more features we have, the more empty bins we will have, and the performance of our histogram classifier will suffer. This is called the curse of dimensionality, and it is one of the downsides of the histogram classifier. Okay, let's summarize this video. How does the histogram classifier work? First, we make a histogram for each class. Then, for each bin, we count the number of objects per class, and we predict the majority class for that bin. The histogram classifier can suffer from empty bins or bins with ties. This happens especially if we have many features or if we have many bins. As a solution, we can try to get more data, use fewer bins, or use bigger bins. Otherwise, we have to flip coins to break the ties, but this is something we want to avoid, because in that case, we will probably not get a good performance with our histogram classifier." "Power Grid - Transmission & Distribution Networks","https://www.youtube.com/watch?v=C9uVjHgTD14","Welcome to this clip. My name is Marie de Cote and I will explain the difference between transmission and distribution networks. Then I will discuss how we can solve these electrical networks and overcome their challenges. But first, let us analyze the classification of electrical networks. There are two types. On the one hand, we have the transmission network, as you see here. You can see the big producers and the high-voltage network that transports the electricity from these producers to cities or factories. From there it goes into the distribution network. In general, an electrical distribution network is a low-voltage network that is more localized: think of a city or a part of it, or a small group of small industries where only a limited amount of power is consumed. It is to be noted that the models of the transmission and distribution networks are different. Moreover, these networks are stand-alone, so we have a separate simulation for the transmission network and the distribution network. Due to the penetration of renewable energy sources, like wind and solar, the goal is to couple the transmission and distribution networks and to do the simulation on this coupled network. Now let us focus on the main differences between these two networks. A transmission network has more of a meshed structure, which means that the cities through which the electricity flows are interconnected, sharing reliability. On the other hand, the distribution network is more localized, for example the street or neighborhood of a particular city. Here you have one connection point, and energy is stepped down from this point and transported downstream. This is termed a radial or tree-like network structure. Both transmission and distribution networks have three phases. But in the transmission network they are balanced, and in the distribution network they are unbalanced. The consequence is that for a transmission network it is sufficient to simulate only one phase, because all phases are in balance: the other phases will simply be a phase shift of the simulated phase. Due to its unbalanced nature, you have to compute all three phases separately in a distribution network. The responsibility for these two separate networks lies with two different utilities. In the Netherlands, the transmission network is handled by only one transmission system operator, or TSO, the utility called TenneT. In contrast, the distribution network is the responsibility of different parties, depending on the region.
These are known as distribution system operators, or DSOs for short. It is to be noted that the TSO and the DSOs operate separately. This can lead to problems such as limited information sharing due to privacy issues. Hence, it is not easy to simulate both networks in a coupled manner. So, what do we need to simulate these networks? We need some kind of software that can simulate distribution networks, transmission networks, and the coupled network. It is evident that the reliability of the network is the goal of the simulation. Different properties of the network are used as input to the software, such as the voltage, the resistance of power lines, the generated power, and the coupling of the different lines. Note that in all nodes, at each instant, the energy conservation property should be satisfied. In the end, we want to have an optimal operation of the different networks. We already have such a solver for the individual networks, but how can we simulate both networks together? There are two approaches we can use, and let's start with the decoupled one. The electrical transmission and distribution networks go separately into their own solvers, which then iterate to know what happens. For example, the information from the transmission network is sent to the distribution solver. Then the distribution part is solved, and the obtained solution is sent back to the transmission solver. This is done a number of times until we get a certain convergence, or in other terms, until the solution is satisfactory. The other option is the coupled solver. Here, the simulations are done together: both the transmission network and the distribution network are described by their own models, those are combined in one big problem, and then we have one common solver that tackles the problem. Hopefully the solution is the same as what is found by a decoupled solver, but the paths taken are different. There are certain problems associated with solving coupled transmission and distribution networks. Let us analyze those. Firstly, if we solve both networks separately, we get a unique solution for the distribution network, the transmission network, and the coupled network. Remember that it is not a linear problem, which means that we cannot simply add the results together; instead, we have to solve it in an integrated way. Another problem encountered is that there are many equations. Questions arise such as: can we use them, do we need to use them all, or can we only partly use these equations? When we discussed the difference between the two networks, we mentioned the differences in the balancing of the networks and the need to simulate only one or all three phases. This leads to having two different simulation techniques. We must determine if it is best to do a decoupled or coupled simulation, both for the transmission and the distribution grid, and also how we can combine them accurately and efficiently. Next, we have a very crucial problem. By law, it is not always allowed to transmit data from a TSO to a DSO or the other way around, which makes it hard to solve a coupled system. Thus, sometimes we cannot use the best or most optimal solver, because we do not have sufficient data available. This also means that it is not only a problem from the mathematics or physics point of view, but it also becomes a juridical and ethical issue. In the future, the interaction between these different network owners should become possible.
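As an illustration of the decoupled approach just described, here is a minimal Python sketch of the back-and-forth iteration. The two solver functions are stand-ins of our own invention (simple linear maps); real transmission and distribution solvers handle nonlinear power-flow equations, but the exchange-and-iterate structure is the same.

    def solve_transmission(boundary_load):
        # Stand-in for a transmission power-flow solve: returns boundary voltage.
        return 1.05 - 0.02 * boundary_load

    def solve_distribution(boundary_voltage):
        # Stand-in for a distribution power-flow solve: returns boundary load.
        return 0.8 + 0.1 * (1.0 - boundary_voltage)

    load, tol = 1.0, 1e-9
    for iteration in range(100):
        voltage = solve_transmission(load)      # transmission -> distribution
        new_load = solve_distribution(voltage)  # distribution -> transmission
        if abs(new_load - load) < tol:          # converged: both sides agree
            break
        load = new_load
    print(iteration, voltage, load)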
Finally, if insufficient data is available and a coupled simulation is required, we can still do it, but it becomes more challenging to couple the two systems. Let us summarize. We explained that the transmission grid is meshed, balanced, and requires only a single-phase simulation, as opposed to the distribution grid. Also, both parts are handled by different entities. Then, we saw that it is needed to couple them in the future. For this, two methods can be used: a decoupled and a coupled approach. Finally, we have seen that there are challenges when coupling the two networks. These include the non-linearity of the solutions, insufficient data exchange, and the different type of simulation needed for each network. Thanks for joining today." "The Physics of Flight (Part 2): Thrust and Drag","https://www.youtube.com/watch?v=z1v22VPcOyE","This is the second lecture on the physics of flight. We looked at the lift force in the previous lecture, and we said there is a price to pay for this lift force. In this lecture we will look at the horizontal forces, the drag and the thrust force, because the thrust force is how we counter the drag force. So which forces act upon a plane? If a plane is flying at constant altitude and constant speed, then we have three forces acting upon it: the aerodynamic force pointing up and backwards, the thrust force pointing forward to counter the backward component of the aerodynamic force, and the weight force, which needs to be countered by the lift component of the aerodynamic force. These three forces are often drawn as four forces, because we split the aerodynamic force into two components, using the direction of the speed to decompose it. Everything perpendicular to the speed, which for horizontal flight includes the weight, gives us lift and weight as the forces orthogonal to the direction of the speed; in the direction of the speed we have drag and thrust, and we look at drag and thrust in this lecture. For a constant altitude and constant speed, that is to say equilibrium, lift is equal to weight and thrust is equal to drag, and in a more formal way we then write L = W, T = D. But remember that the drag is a result of the lift as well. So these are all connected: lighter weight is less lift, less lift is less drag, less drag is less thrust, is a lighter engine, maybe less weight, and so there are a lot of cycles in there. Let's look at the lift force again; we have this formula for the lift force. So is there a similar formula for the drag force? The drag force is a component of the same aerodynamic force, so the formula is indeed very similar: it is actually the same formula as for the lift, except we have the drag force D and the drag coefficient CD. Why not use the frontal surface of the wing? Remember that it is a component of the same force and the lift is an important factor in this drag, so we use the same surface area, as seen from the top view of the wing, in this equation. This drag coefficient is also a function of the angle of attack, and the drag is the price to pay for the lift. So what is the ratio between this price and the lift we gain? If we look at the formulas, we see they are almost equal, so if we divide the lift by the drag, we get just the ratio of the two coefficients. This means that, independent of altitude or speed, this ratio is a given for a certain angle of attack, a certain flow, a certain shape.
If we make a graph of this, of the amount of drag that we get per unit of lift, we see it actually increases quadratically with the lift coefficient. We also see that the drag coefficient is much lower in value than the lift coefficient. This is good news, because the drag is the price you need to pay, so we get more than we pay. If we take the point illustrated here, we see a lift coefficient of 0.60 for which we only pay with a drag coefficient of 0.03, which means the factor between the two is actually 20. So our drawing is not at all correct: if we drew it in the right proportions, it would look more like this, 20 times as much lift as the price we pay in drag. It is not energy out of nothing, because the aircraft does not move in the vertical direction, so no work is done by the lift force; work is force times distance traveled, and we remain at the same altitude. But it is still impressive, 20 times. Part of the reason for this is the wing design. We have become better at wing design over time, so that is how we can achieve these high lift-over-drag ratios, and actually a thicker wing is better: one of the developments over time is thicker wings. If we look at this ratio of drag per lift a bit closer, in a mathematical way, there is a formula for it. It is a quadratic formula, and we call it the drag polar: a zero-lift drag coefficient plus a constant times the lift coefficient squared. If we look at this constant a bit closer, and I am not going into details, I just want to point at one character in there: the character A, which stands for aspect ratio, the slenderness of the wing. This means that if the wing is more slender, the price that you pay for the lift is less. So if we look at some existing aircraft: the Boeing 747-400 has a factor of 15, 15 times more lift than drag, and the example of 20 is actually the value for the A380, with more slender wings and a better lift-over-drag ratio. In fact, if we look at gliders, which have extremely slender wings, the factor is even 50: 50 times as much lift as you get drag. So you can achieve a lot there. So there are two tricks to generate lift for a low price in drag: slender wings, and thick wings with a very good shape. But still there is a price to pay, and this ratio, the lift-over-drag ratio, gives us the price. By the way, this ratio is also the same ratio as the distance you can still travel from a given altitude. So in a 747, if the engines fail at 10 kilometers altitude, you can still fly for 150 kilometers. But normally there is a price to pay, and we pay it with the thrust. So how is this thrust generated? If you look at the Wright Flyer: the Wright brothers had the problem that they needed an engine of which all the engine manufacturers around the world said it does not exist. So the Wright brothers said, well, then we have to build it ourselves, and so they built their own engine. In fact, the same can be heard nowadays for a lot of new problems, new engines for new propulsion forms. But be aware that in 1940, when the jet engine, the gas turbine, was new, all the experts that existed around the world were assembled in the Gas Turbine Committee of the US National Academy of Sciences, and they said that even with all improvements possible, it would always be too heavy for an airplane. At the same time, others were already doing experiments with it.
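Returning to the drag polar introduced a moment ago, here is a minimal sketch of it and of the resulting lift-over-drag ratio. The lift coefficient of 0.60 is the value quoted in the lecture; the zero-lift drag coefficient and the polar constant below are invented for illustration.

    # Drag polar: CD = CD0 + k * CL**2  (CD0 and k chosen for illustration)
    CD0, k = 0.015, 0.05

    def lift_over_drag(CL):
        CD = CD0 + k * CL**2
        return CL / CD

    for CL in (0.3, 0.6, 1.0):
        print(CL, round(lift_over_drag(CL), 1))
    # For CL = 0.60: CD = 0.015 + 0.05 * 0.36 = 0.033, so L/D is about 18,
    # of the same order as the factor 20 quoted in the lecture.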
So you see this often, that some people say: it is not possible. Even physicists say it, like Kelvin said it is not possible to fly, and these committee members were also physicists. But be aware that often they are thinking: it is not possible in this way. There may well be another way. Right, let's look at this jet engine then. How does it work? Well, there is a core part which compresses air, burns fuel, and then lets the air expand at a higher temperature. And this is the core principle used to generate thrust: because the air expands at a higher temperature, it gains so much speed that it can provide the propulsion. There is also a certain amount of air going around this whole core section. And if you look at today's engines, that part is actually far larger. It is called a high bypass ratio: a larger amount of cold air going around the core section in the center. Why is this? Is it to make engines cleaner, quieter, or simply to save fuel? To answer that, we have to look at the physical principles, and we have to understand a few concepts. Look at this poor guy pushing his car. He exerts a force on the car, and in this way he gives the car energy; over time he has performed a certain amount of work. But per time unit he has to deliver a lot of power to do this, in this case because his car has no power of its own. Using these concepts, we can explain a lot about the jet engine. The thrust is generated by accelerating the air that goes through the engine every second. So we have a mass flow, a certain amount of air per second going through the engine, which it releases at a higher speed. The product of mass and speed is called momentum, and by changing the momentum you create thrust as a reaction: the thrust is the mass flow times the speed difference. If we look at power: power is the amount of work done per second, and work is force times distance, so power is force times speed. By multiplying the thrust force with the speed of the aircraft, which is the same as the speed of the air coming in, we can calculate the power available for propulsion. What we have to pay is, of course, the increase in kinetic energy of the flow. The kinetic energy is one half m v squared, so per second the change is one half times the mass flow times the difference of the squared speeds. So if you remember one half m v squared as kinetic energy from high school, you know that this is the cost. What I am after is the efficiency, so I look at the gain per cost: the gain is the propulsion power, the cost is the extra kinetic energy, and we arrive at a formula for this. By doing some manipulation of this formula, including bracketing, writing the square in a different way, and cancelling equal parts in the numerator and denominator, one important step emerges: the mass flow drops out, and the equation for the efficiency of the jet can be written purely as a function of the speeds entering and leaving the engine. If we look at this equation, we see that the jet efficiency is given as two divided by the ratio of the speeds plus one. There is no mass flow in this, unlike the thrust force, where the mass flow does appear.
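A minimal sketch of the two relations just derived, thrust as mass flow times speed difference and efficiency = 2 / (V_out/V_in + 1), with invented speeds: the same thrust produced with more mass flow and a smaller speed difference comes out more efficient.

    def thrust(mdot, V_in, V_out):
        return mdot * (V_out - V_in)        # N: mass flow times speed difference

    def jet_efficiency(V_in, V_out):
        return 2.0 / (V_out / V_in + 1.0)   # no mass flow appears in here

    V_in = 250.0  # m/s, flight speed (invented for illustration)
    for mdot, V_out in ((100.0, 650.0),     # low bypass: little air, big acceleration
                        (400.0, 350.0)):    # high bypass: much air, small acceleration
        print(thrust(mdot, V_in, V_out), round(jet_efficiency(V_in, V_out), 2))
    # Both cases give 40,000 N of thrust, but efficiency rises from 0.56 to 0.83.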
So if we want a high efficiency, this fraction has to be a high number, so its denominator has to be a low number, and therefore the V out has to be low. But we needed a high V out to generate thrust. The way out is mass flow: more mass flow costs nothing in efficiency, and with more mass flow and less V out we can still generate the thrust we want, but at a higher efficiency. In other words, it is better to accelerate a lot of air a little bit than a little bit of air a lot. And this is why engines are getting larger. You see here two versions of the A320, and the newer one actually has larger engines. The same holds for the 737: the 737 MAX has larger engines, which meant they had to be placed slightly forward because they would not fit under the wing anymore. Summarizing: from aerodynamics we learned that yes, you can generate lift with a wing, but there is a price to pay in drag, and if you do it efficiently you get 20 times as much lift as drag, or maybe even 50 with very slender wings. The cost we have to pay, we counter with the thrust, and accelerating a lot of air a little bit is the most efficient way to do so. And that is why engines are ever getting bigger. I hope this gave a little bit of insight into the basic physical principles of flight. This is the end of this second physics lecture." "Sorting Plastic Waste - the Polymer Fraction","https://www.youtube.com/watch?v=FB69ydQacaQ","In this video, we will take a closer look at how the technologies that recyclers use to sort plastic waste work. Sorting technologies either rely on differences in physical properties between polymers, such as density or electrostatic properties, or they use optical instruments that are able to distinguish different polymers. Often a combination of methods is used to achieve the desired result. An example of a method that uses differences in material properties is sink-float density separation. In this method, mixed polymer waste is introduced into a flotation bath filled with water or another liquid. Because of the different densities, lighter polymers such as polypropylene and polyethylene will float and can thus be collected from the surface, whereas heavier plastics like polystyrene or ABS will sink to the bottom. By using additives that change the density of the flotation liquid, different types of polymers can be sorted in this way. However, the effectiveness of this method is limited by the fact that different polymers have density ranges that can overlap one another, as shown in this graph. A float separation of polymers with overlapping density ranges would require additional technology that makes the flotation process more precise. A promising, but not yet widely used method is called selective flotation separation. It makes use of a difference in hydrophobicity between polymers. Before the polymers enter the flotation bath, the particles undergo a surface treatment that selectively changes their wettability. This means that when air bubbles are introduced from the bottom of the flotation bath, they will attach to the more hydrophobic polymer fragments, which will then begin to float upward. The more hydrophilic particles will have a completely wetted surface and therefore will not start floating. The characteristics and limitations of these types of density-based sorting methods have clear implications for design for recycling, and the sink-float principle itself is illustrated in the sketch below.
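A minimal sketch of the sink-float principle (the density values, in g/cm³, are typical handbook figures for these polymers, not taken from this video):

    # Sink-float separation: flakes lighter than the liquid float, heavier ones sink.
    densities = {'PP': 0.90, 'PE': 0.95, 'PS': 1.05, 'ABS': 1.05, 'PET': 1.38}

    def sink_float(liquid_density):
        floats = [p for p, d in densities.items() if d < liquid_density]
        sinks = [p for p, d in densities.items() if d >= liquid_density]
        return floats, sinks

    print(sink_float(1.00))  # water: PP and PE float; PS, ABS and PET sink
    print(sink_float(1.10))  # a denser liquid: only PET still sinks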
First of all, permanent connections between different materials should be avoided, to prevent polluting one or the other material stream or making separation impossible. Also try to avoid additives, coatings and polymer blends: these can change the density of the polymer to the point where it will be misidentified, again potentially polluting other polymer streams. Further, electrostatic properties can be used to separate polymers. This is known as triboelectric separation. First the plastic particles are shaken in a charging unit where, by rubbing against each other, the different polymer particles become differently charged. A conveyor belt then guides these particles towards an electrostatic field that deflects the differently charged particles into different containers. There are also technologies that use optical instrumentation to sort streams of mixed polymer particles. A commonly used method is near-infrared spectroscopy. It is a relatively fast method that does not require surface pretreatment of the particles. It is based on differences in the absorption of near-infrared light by different polymers. After the plastic particles are illuminated, a near-infrared sensor detects the reflected wavelengths. With this information a processing unit determines the polymer type and sends a signal to an air jet, which blows the different polymers into separate containers. A limitation of near-infrared spectroscopy is that it does not work well with dark colours, especially black, since these absorb nearly all light, also in the near infrared. It can also run into problems when labels or coatings cover the surface of a material, which will cause a false reading by the sensor. In addition to near infrared, there are other optical methods that use X-ray or laser technology to distinguish between polymers. After sorting, certain flakes will remain undetermined, either because they are too contaminated or because they were not successfully identified. This residual stream will most likely be incinerated for energy recovery. In a circular economy, this is considered a leak in the system. In some cases, mixed plastics can be applied in products of relatively low quality, but this is to be avoided too, as it implies that the product is made with an ill-defined composition, which makes its future processing after a next life cycle very hard. A related design consideration is to use common plastics only. Common polymers, such as polyethylene, polypropylene, ABS, polycarbonate and polystyrene, are already being recycled at scale. Therefore, they are much more likely to be recovered than uncommon plastics, which are retrieved in volumes that are too small for viable recycling; these less common polymers are likely to end up being incinerated as well. Knowing how the different sorting technologies work, and the principles on which they are based, helps to understand why certain design choices can improve or decrease the recyclability of a product. Understanding the basics of sorting is essential to make proper choices for recyclable materials when designing a product." "Recycled Plastics - Reprocessing and Properties","https://www.youtube.com/watch?v=j9ZhXLDhvgU","Recycling material only makes sense if you can apply the material in new products. In this video we will discuss the properties of recycled plastics, and how and why they differ from virgin plastics. To do so, we first have to understand how recycled grades are created. This starts during the recycling process.
Once a polymer fraction has been sorted into separated streams of mono-material flakes, the next phase of the recycling process begins: reprocessing them into useful resources for manufacturing new parts and products. If the flakes are of sufficient purity and quality, they can be washed and ground and be directly re-applied in the injection molding process. Yet more often the recycled material does not meet the desired quality requirements. Its properties must first be improved by blending in additives or virgin polymers. This process is called compounding. Compounding is necessary because recycled plastics generally do not have the same properties as virgin plastics. There are several reasons. First of all, the same base polymer can contain different additives, like plasticizers, pigments, fillers and flame retardants. In recycling, all these additives are mixed, resulting in a less defined material. Contamination still present in the recycled plastics can further reduce the purity. Furthermore, the polymer properties might have changed during the product's use phase. Plastics that are recovered from products that have been used for multiple years will have degraded over time. This can include mechanical degradation, leaching of chemicals over time, and oxidation due to, for instance, UV exposure. Also, processing from previous manufacturing will affect the quality of the recycled plastic: thermal degradation might occur during melting, as well as mechanical degradation due to shear forces on the polymer. Now, how exactly do the previous life cycle and the recycling process affect the properties of recycled plastic? There are a number of issues that can occur. An example are impurities that increase the density of the recycled plastic, negatively impacting its stiffness and weight. Impurities in recycled polypropylene, for instance, may increase its density and thereby the weight of the final product. Another problem is that degradation can cause the polymer chains to break or crosslink. This will decrease the material's tensile strength and elasticity. It also means recycled plastics generally become more brittle, resulting in a lower impact resistance. Because of this, a product in which virgin plastics are simply replaced by their recycled equivalents may be more easily damaged by shock. As recycled plastics have undergone repeated processing, fatigue might occur earlier in the life cycle of a recycled plastic than in that of a virgin product. Furthermore, there is the impact of thermal and mechanical degradation on the viscosity of the material: as a result of recycling, the plastic's melt flow rate is altered, which can complicate the injection molding process. Finally, keep in mind that if the waste plastic has been in contact with organic waste, the resulting product or part may have an undesirable smell. By compounding the recycled material, the properties can be altered so as to meet the requirements for a standard recycled grade or for a specific application. During compounding, additives or virgin material can be added to the recycled plastic. Mechanical properties can be enhanced with thermal stabilizers, UV absorbers and impact modifiers. Aesthetic properties can be improved by adding pigments. Generally, light or bright colors are more difficult to achieve in recycled grades than darker ones. Yet, in some cases, meticulously sorting the plastic flakes by color can already produce good results.
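To make the idea of compounding a bit more tangible, here is a minimal sketch that estimates a blend property with a simple linear rule of mixtures. This is an illustrative first-order approximation only, not the method compounders actually use: real blend properties, impact strength in particular, can deviate strongly from linear behavior, and all numbers below are hypothetical placeholders.

```python
# Illustrative sketch: estimating a property of a recycled/virgin blend with a
# linear rule of mixtures. Real compounding behavior is more complex; all
# numbers here are hypothetical placeholders.

def blend_property(p_recycled: float, p_virgin: float, recycled_fraction: float) -> float:
    """Linear rule of mixtures for one material property."""
    if not 0.0 <= recycled_fraction <= 1.0:
        raise ValueError("recycled_fraction must be between 0 and 1")
    return recycled_fraction * p_recycled + (1.0 - recycled_fraction) * p_virgin

# Hypothetical tensile strengths in MPa for a degraded recyclate and a virgin grade.
recycled_strength, virgin_strength = 20.0, 32.0
target = 28.0  # required strength for the application

# Find the largest recycled content that still meets the target (1% steps).
best = max(
    (f / 100 for f in range(101)
     if blend_property(recycled_strength, virgin_strength, f / 100) >= target),
    default=0.0,
)
print(f"Maximum recycled content for {target} MPa: {best:.0%}")  # 33%
```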
Another aesthetic limitation is the quality of the surface finish: contaminants make it more difficult to create high-gloss surfaces with recycled plastics. A possible solution to this problem is to turn it into a feature, by deliberately designing textured surfaces. It is also important to note that additives may themselves make the material more difficult to recycle in the future; therefore, try to limit their use whenever possible. If possible, try to avoid using glass fiber as a filler, as it will pollute the recycled plastic and diminish its mechanical properties. Alternatively, try to use carbon-fiber or mineral-filled polymers. After compounding, recycled plastics are extruded and granulated, so they can be used for the production of new products. Because plastics are so versatile and are available in countless different grades, it is hard to specify exactly how a recycled grade will perform compared to a virgin grade. In contrast to virgin plastics, the properties of a recycled grade may also show some variation between batches, because the composition of the waste plastic it is sourced from will differ over time. This means that the design of products with recycled materials should be more robust with respect to some variation in material properties. Additional testing and trial and error may be required to build confidence in the properties of a recycled plastic before you can confidently apply it in the part or product you are designing." "Rethink The City But with Care","https://www.youtube.com/watch?v=xw_WZNXi3NE","Hello. In this course you'll be learning about urban development from examples in many countries. But I would like to start with a brief note of caution. Cities routinely borrow models of urban development from elsewhere, but a cut-and-paste way of learning from other places has often caused more problems than it's solved. Think of the case of Bangkok. For centuries, Bangkok followed a traditional model of urban development, organised around the principle of living with nature. People built their city and organised their lives around the water. From the 1960s the city began to adopt American models of urban development and tried to control natural systems. Ancient canals were filled to make way for highways and urbanisation. This has made the regular flooding worse and, not surprisingly, it's the poorest communities that are the most badly affected. I'm not arguing that learning from other countries is a bad idea. One of the best things about working here in Delft is the opportunity to share ideas with staff and students from many countries. In this course you'll be learning with others from more than 130 countries. But we need to take care about how we capture lessons from other places and how we use examples of urban development in our own cities. The way that cities work is not like an airplane. We expect the physics of the airplane to be universal. We build it in the same way whether it's landing in Amsterdam or Manila. That doesn't work for cities because they're rooted in a place. And places have very different geographies, economies, histories and cultures. What works in one place may cause problems somewhere else. The current fashion for smart city solutions is a good example. For rich cities, smartness is about making use of new high-tech solutions. How relevant is this for cities where basic services don't exist or are not fairly distributed?
These cities need to find their own meaning of smartness. So how can we learn from other places? Two things. First, we should try to separate the universal from the particular. That means understanding what principles can and should be applied everywhere, and what are the solutions that need to be formulated locally. Sustainable urbanization is a good example. There are universal principles that apply in any city, for example the need to reduce consumption of non-renewable resources. But the way that we implement this principle needs to be decided in each city, taking account of local conditions and cultures. Second, we should be modest about our knowledge of other places. Finding a good reference project elsewhere can be very useful for experiencing other ways of looking at a problem and for generating ideas about solutions. But do you know enough about the project? Do you know enough about the place? We need to investigate the historical context and local social, economic and cultural conditions. The problems of cut-and-paste solutions are not just abstract ideas for academics. They have real, concrete impacts and consequences for communities. They should inform the ethics of planning cities, especially for those working internationally. Make the most of this opportunity of learning about rethinking the city from the examples you are shown. But concentrate on finding the principles and then work on how to apply them in your own city." "The Global Search for Affordable Housing","https://www.youtube.com/watch?v=DxfKGwt71l0","The search for new models for affordable housing in the world's growing cities has never been more urgent. Good affordable housing is needed to confront the still accelerating growth of urban populations and the challenges of growing urban inequality and segregation. It is necessary to make it possible for those with little or no means to access and inhabit the cities that hold the promise of providing them with a better future. By 2050, two out of three human beings will live in cities. In other words, over the next three decades, 2.5 billion people will be added to the world's urban population. The rate of urbanization will be particularly dramatic in today's low- and middle-income countries. Nearly 90% of the increase in the world's urban population will be concentrated in Asia and Africa. The phenomenon of rapid urbanization, coupled with widespread change in demographics and the effects of the global climate crisis, brings with it many challenges to people's livelihoods, to the resilience of urban communities, to the environment, and to the physical and social infrastructure of the world we live in. A key question is how we will address the urgent need for adequate and affordable housing in the rapidly growing cities of the global south. A question that is, however, just as urgent in the world's high-income countries: cities like Amsterdam and London continue to attract newcomers, creating new scarcities and unprecedented surges in house prices, making these cities for many an unaffordable place to live. To make sense of the challenges we are facing today, we need to develop critical accounts of experiences and developments from the past. Before the radical global modernization of the last century took off, people created their living and working environment in a slow process of innovation and adaptation of traditions of building and the formation of spaces for everyday life.
In the last hundred years, many solutions to address the need for housing that focused only on production, speed and numbers failed and are still failing. For the creation of an accessible and meaningful habitat, many more aspects have to be considered. The provision of good housing has always been a highly charged political and economic issue. However, the design of housing has to be considered an essential aspect as well. The design determines the possibilities for the inhabitants to connect their dwelling to the demands of everyday life. The design plays a key role in whether the provided accommodation will be sustainable, through the possibilities of good maintenance, usability and adaptability over time. The design, in other words, is a crucial factor in allowing citizens to build an urban future that is connected to both their ambitions and their realities. As a practising architect, I was in recent years involved in the Thamesmead regeneration project in southeast London, a project in which the outcomes of ideals from the past and the realities of the present regarding housing design had to be connected to a vision for the future. As one of the great metropolises of the world, London is a city where the lack of adequate housing for those with little or no means, and the search for solutions, has a long and fascinating history. The slums where the working class of 19th-century Victorian London had to live have been immortalized by authors such as Charles Dickens and artists such as Gustave Doré. One of the first attempts to address this issue was the foundation of the Peabody Trust in 1862, made possible by a very substantial donation of the American banker and philanthropist George Peabody. The Trust started a campaign to build solid housing to replace the overcrowded and disease-plagued London slums. The Trust developed a financial model based on the principle that the investments should bring in a modest revenue in order to be able to continue the Trust's activities, and indeed they go on till today. The first architect appointed by the Trust, Henry Darbishire, developed a standard design for the so-called Peabody building that would be used for more than 40 years. A very robust and well-detailed design allowed for variation in the size of the dwellings, minimization of the maintenance costs and flexibility in clustering the buildings, so projects could fit in the mostly irregular sites within London's organically grown urban structure. The strategy has proved successful till today: the Peabody buildings still stand firm in the London townscape and are now a desired place of residence for many. The private philanthropic Peabody Trust realized many projects, but the need for affordable housing in the continuously expanding city remained a constant challenge. Around 1900, the municipal government of London started to address the issue of providing housing as well. The problem became once more a very urgent challenge after the end of the devastating Second World War and was energetically addressed by the architects of the housing department of what was then named the GLC, the Greater London Council. The GLC had by then an impressive history of realizing housing estates, many of them designed to replace the inner-city slums of speculative Victorian working-class housing. In the mid-1960s, the GLC initiated the setup of a program and the design of a new town for 60,000 people in East London called Thamesmead.
The project was intended to be the showpiece of the then-current ideals and concepts for affordable housing, or, as it was better known then, council housing. Thamesmead was planned to become the ideal new town, with a variety of housing typologies as well as a generous provision of space for employment, schools, services and recreation. The original 1967 master plan shows a large number of long, meandering blocks, the so-called spine blocks, that formed the backbone of the district: a long string of buildings along the main access routes, converging in a large central area around the marina by the river. The spine blocks created a wind and noise buffer for the low-rise neighborhoods lying behind them. By connecting all housing and other facilities with a network of elevated pedestrian access decks, Thamesmead turned into one vast megastructure of interconnected townhouses, tower blocks and amenities. Only the first phase of the ambitious master plan, Thamesmead South, was realized. The complex and brutalist design gave the spine blocks a sculptural and unique expression. In a repetitive pattern of staggered short blocks, single-family terraced houses were clustered around parking courts and collective green spaces. In addition, series of residential towers accommodating single or double households were situated along the fringes. Despite all the ideals and good intentions of the designers and planners, Thamesmead came to figure as the model of a short-lived future. In the early 1970s, prohibitive construction costs and changing views on housing design led to drastic changes. As the original vision for the area was abandoned, Thamesmead quickly gained an unpleasant reputation. Due to economic reasons, the promises of a large shopping area with a marina, new transport links to the city and a bridge across the river Thames went unfulfilled, leaving the place in an isolated position. The experimental prefabricated concrete housing construction systems failed, causing technical problems such as leakages and draughts. Thamesmead's interconnected walkways and elevated living spaces resulted in a neglected ground-floor area without surveillance. This led within a few years to a spiral of decline, and the estate served as a rough urban estate setting in various movies, such as Stanley Kubrick's A Clockwork Orange. Over time, other occurrences, such as the partial transfer of ownership to the individual inhabitants, added to its further downfall into a sad state of neglect and disrepair. In the second part of this presentation, we will see how the Peabody Trust is giving a new span of life to Thamesmead South, and explore how the aspects of housing design we discussed in this London story are now being addressed in the very different context of Ethiopia's capital city of Addis Ababa." "Lifecycle of a Building Product","https://www.youtube.com/watch?v=yFwe4Vc1_S8","Hello and welcome back to the MOOC Circular Economy for a Sustainable Built Environment. When we talk about the built environment, we mostly do not envision one monolithic entity, but rather a compilation of smaller elements at different scales, from materials to buildings to neighborhoods and regions. All those scales need resources to be constructed and to function throughout their life cycle. Circularity is seen as a way of better managing those resources. Therefore, the built environment offers various opportunities for reducing its impact.
In this video, we will discuss the central role that building products have in developing a circular built environment. Products have to be designed in a new, smarter way, with a longer life span and possibilities to be repaired, upgraded, and even adapted in other systems. Moreover, circular products take into account the influence on the other scales of the built environment. To understand how and when to intervene, the first step is to look at the life cycle of the product and its environmental and economic impact. In this lecture, I will focus on the life cycle phases of building products, from production to end of service. Those life cycle phases are similar for different products. Let's take the brick, for example. The brick is one of the most commonly used products in buildings. Its history dates back to ancient times; since then the size, shape and composition of bricks have changed, as well as their making process. Let's have a closer look at the life cycle of a clay brick. First comes the production phase. It includes extraction of raw material, transportation and manufacturing. So the very first stage in the brick production is the raw material extraction. Clay bricks consist mostly of clay minerals mixed with water, potentially also using additives and coatings to enhance the properties of the final product. After extraction, raw materials are transported to the production facilities. Clay can be found locally or brought from further away for quality or financial reasons. Where the clay is sourced determines the environmental impact of the transportation activities. The next stage is manufacturing. Brick manufacturing includes making the clay mixture, forming or extruding the bricks, drying, firing in a kiln and finally packaging. In this production phase, the highest environmental impact occurs during firing, due to the emissions of burning fossil fuels to fire the kiln. The packaging process also contributes to the overall environmental impact because of the material and machinery used. After production, the next phase in the brick's life cycle is the construction or application phase. After manufacturing and packaging, bricks are transported to the building site, where they are installed in the building, or rather form the building itself. The brick's most common function in construction is in external or internal walls. Bricks are laid in different arrays using connecting mortar. The way they are used in the building construction also determines the brick type and the construction method. For example, facing bricks, which are placed as an exterior layer, have been manufactured to tolerate water: they are denser and have been fired at a higher temperature. Hollow bricks, on the other hand, have better insulating properties, but they are not used as the final surface; they are often plastered or used as the inside layer behind the facing bricks. After the construction is finished, the bricks are in place and ready to fulfill their function during the use phase. This phase is the most extended part of the life cycle in terms of time. Clay bricks can be used for 80 or 100 years or even longer. The lifespan depends on weather conditions and construction techniques. During their use phase, bricks hardly have any negative environmental impact, since they require very little maintenance, such as, for example, repointing to repair the connections.
Finally, as part of a building component, the wall for example, bricks reach the end of their service life when the building is demolished. At this stage, the brick construction is rarely disassembled from the wall into loose components; it is demolished together with the rest of the products and materials composing the building. The demolition has an environmental impact, as it once again involves diesel-powered machinery and results in waste. Bricks that are part of the demolition waste are transported away and often placed in a landfill as a way to manage the waste. Thus, the negative environmental impact at this stage can be associated with the demolition and transportation. Bricks resting in the landfill are harmless, but is this a sustainable and circular way to handle construction waste? Let's talk about solutions which can help to close the loops, meaning that the brick, as a part or as a whole, can continue or restart its life cycle. One option to close the loop is recycling. The bricks can be used as a secondary raw material after being shredded and employed in various construction activities; for example, they are used for road beds. In this way, however, the brick is downcycled. Attempts are made to keep the value of the product and upcycle it into new building products such as bricks or tiles. Unfortunately, recycling is at present energy-intensive and lacks proper infrastructure, which makes it less economically feasible. The best option then could be reuse. Brick walls can be dismantled, and after the mortar is removed the bricks can be reused. However, dismantling a brick wall might be dangerous, and removing the mortar difficult and time-consuming. To avoid that, products can already be designed to be easily disassembled and reused. A good example of such a design are the dry-stack bricks. The mortar used to join the bricks is replaced by plastic units, which makes the bricks easy to separate and reuse. This approach decreases the environmental impact of the bricks, because it extends the life cycle and reduces the need for new products. To make this possible, the end-of-use scenario was already considered when the bricks were designed and manufactured to fit the system. So, in this video we discussed the life cycle stages of the brick, from production to end of use. Although the life cycle stages are similar for different products, the duration of each stage, particularly as far as the use phase is concerned, can differ. Thus, different products have different life spans, lasting for a longer or shorter time. Recognizing the importance of the different life cycle stages and their environmental impact will help us to close the loops and move towards a circular built environment. We can, for example, intervene in the end-of-service scenario by reusing the bricks. To make this happen, we need to be proactive and think in advance about how to extend the life cycle of the product or recover and reuse the material. Or we can decide to reuse existing components to manufacture new building products. Those decisions are made even before production, during the product design. How building products are designed determines how they can be used to close the loops. Can you think of the life cycle stages of different building products, such as windows, doors, internal partition walls, kitchens or carpet tiles? How would you intervene to make them more circular? Think about it!"
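To make the phase-by-phase reasoning of this lecture concrete, here is a minimal sketch that tallies an impact figure per life cycle stage and compares a linear scenario with a reuse scenario. The stage names follow the video, but every number is a hypothetical placeholder, not measured LCA data.

```python
# Sketch of a per-phase life cycle tally for a building product.
# Stage names follow the video; the CO2e numbers are hypothetical placeholders.

LIFE_CYCLE = {
    "production (extraction, transport, manufacturing)": 0.25,  # kg CO2e per brick
    "construction (transport to site, installation)":    0.05,
    "use (minimal maintenance over ~100 years)":          0.01,
    "end of service (demolition, transport, landfill)":   0.04,
}

def total_impact(stages: dict[str, float]) -> float:
    """Sum the per-stage impact figures."""
    return sum(stages.values())

print(f"Linear life cycle: {total_impact(LIFE_CYCLE):.2f} kg CO2e per brick")

# Reuse scenario: if a dry-stack brick is reused, the next cycle avoids
# producing a new brick, so only disassembly and reinstallation count.
reuse_cycle = {
    "disassembly and cleaning": 0.02,  # hypothetical
    "reinstallation":           0.05,  # hypothetical
}
print(f"Reuse cycle:       {total_impact(reuse_cycle):.2f} kg CO2e per brick")
```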
"The Physics of Flight (Part 1): Lift and Weight","https://www.youtube.com/watch?v=6e4IWW1MPhY","Welcome, my name is Jacques Hoogstein. I'm one of the best of the faculty and also one of the lectures of the introduction to aerospace engineering lecture. And I will give two short explanations of the physics of flight. In the first one we will look at the lift force. It is one of the mysteries of flight. In fact, not so long ago a famous physicist like Lord Kelvin of the temperature unit has stated that it would be impossible to fly with anything heavier than air. He was proven wrong only eight years later by the Wright Brothers who made their Wright Flyer. And then 100 years later roughly one in his either we were able to make even larger airplanes like the A380. And also the way with you flew changed radically from lying on the wing in the wind to watching a movie while you fly from Paris to New York. Something with the Wright Brothers said that would never be possible to begin with flying from Paris to New York of course not. And we do it now regularly. How do we do this? What is the trick? The secret of flight. Well, if you look at a physical principle that actually three different principles. The first one is being lighter than air. The second is pushing air downwards. Action is reaction. And the third is pushing something else down or slightly the rocket. And this is also roughly the historical order in which we learned to fly. The first one being lighter than air invented by the Monkoly Brothers who benefited from the fact that their father had a wallpaper factory that experiments with balloons. And these balloons got larger and larger and larger at some point they were at point that they could put humans in it after testing it with some animals to see if a human would survive if it would go high in the atmosphere. Courts, birds would survive but if the sheep would also survive then probably humans would also survive in the go up. The king who sponsored the experiments, the air for larger balloons then proposed to put some prisoners underneath the balloon to test whether humans could fly. But there were actually two volunteers, Darlande and the O.J. who said well we think it's quite an honor to be the first one to fly. So they volunteered and did this first flight. In fact nowadays hot air balloons are in French called Monkolyers and helium balloons are still called rochers in honor of these pioneers. And this was actually the way which we flew into the 20th century. Here we see the hidden work and which in 1947 made a promotion tour across the US to promote the technology of flying. Unfortunately they were lacking helium then so they used hydrogen but that even provides more lift but is also combustible. So when I spark due to static electricity ignited the paint on the cloth then together with this hydrogen to us, the huge disaster. Was this the reason that we don't fly with balloons and airships anymore? Not really. Let's look at the physics to understand it a bit better. The physics of balloon flight are called aerostetics and the physics are based by looking at the pressure in the atmosphere. And if we take a certain volume of air let's look at this volume of air then we know that due to the weight the specific weight this volume of this volume has a certain weight the air has weight as well. So the S.G. D. Force acting on it. But if this equilibrium it means that the air pressure around it apparently is able to generate an equal force up and therefore the air stays where it is. 
The idea of aerostatics is that we take out this volume of air and replace it with something else. If we take, for instance, cold air and replace it with hot air, then the pressure around it is still the same, but the weight, due to the lower density of hot air, is actually less, about 25% less, and the difference is the lift you can generate this way. If we do not use hot air but a gas like helium, then we get even 86% less weight, because helium weighs only about 14% of the weight of the air. So that gives a lot more lift. And hydrogen weighs about half of what helium weighs, so you can reduce this remaining difference by another factor of two, which creates about 93% less weight, so 93% of the weight of the air as lift. In a physical formula, we would say that the lift force is equal to the weight of the air multiplied by a reduction factor, because you put something lighter in it: you take the specific weight, the density of the air, multiplied by the volume to get the mass of this air, multiplied by the gravity constant to get the weight of the air; then, based on the difference in density, you calculate the reduction in weight by the gas. Together these form the lift formula for ballooning: L = ρ_air · V · g · (1 − ρ_gas/ρ_air). Is this then the way to travel in the future? It is quite luxurious, so is this the future of sustainable air travel? If you use helium, you get eternal lift for free; you don't need to burn fuel for it. But unfortunately, although air weighs something, it is not that heavy, so to have the lift, which can at maximum be equal to the weight of the displaced air, you need quite a large volume. And this is not a real problem until you start moving forward, because then this volume creates a lot of drag, and therefore you still need to burn fuel to get somewhere. So yes, it is a sustainable way to generate lift, but only if you are not going anywhere. But there is another mystery about the lift of a balloon. If we take this helium balloon, we can see that it generates lift, and we have investigated why: because the helium inside is lighter than the air around it. But if you look a bit closer, this balloon keeps its shape; it doesn't change shape. That is because around the surface of the balloon there is at every location an equilibrium of forces, and therefore it stays as it is. However, if we would sum all these differences, all these zeros, then the total lift force should also be zero. But it is not: there is a lift force. So how does it actually work with the forces? Where does this force come from? To understand that, we discover a principle which is actually very important for aeronautics in general. The reason is that on the top side of the balloon the pressure is a bit lower than on the bottom side, where there is a higher pressure. If you look at the difference, then for every meter of altitude there is about 1.225 kilograms per square meter less weight pressing on it as you go higher. It depends a bit on the weather and the temperature, but roughly this is the number; this is the phenomenon.
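Returning to the ballooning lift formula above, here is a small numerical sketch of it, using standard sea-level air density and approximate textbook densities for the lifting gases (the exact percentages in the lecture may differ slightly, depending on the assumed temperatures):

```python
# Aerostatic lift: weight of displaced air minus weight of the lifting gas.
# L = rho_air * V * g * (1 - rho_gas / rho_air)

G = 9.81          # gravitational acceleration, m/s^2
RHO_AIR = 1.225   # sea-level air density, kg/m^3

# Approximate densities of the lifting gases, kg/m^3 (textbook values)
GASES = {"hot air (~100 C)": 0.95, "helium": 0.17, "hydrogen": 0.09}

def lift_per_m3(rho_gas: float) -> float:
    """Net lift in newtons generated by one cubic meter of lifting gas."""
    return RHO_AIR * 1.0 * G * (1.0 - rho_gas / RHO_AIR)

for gas, rho in GASES.items():
    reduction = 1.0 - rho / RHO_AIR
    print(f"{gas:16s}: {reduction:5.0%} of the air weight as lift, "
          f"{lift_per_m3(rho):4.1f} N per m^3")
```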
If we look at the pressure at sea level, you often hear the number 1013 millibar, but it can also be expressed as about 101,000 newtons per square meter. Maybe that doesn't mean much to you yet, but it is actually the same as over 10,000 kilograms per square meter, and the reason is that there is indeed over 10,000 kilograms of air above every square meter. If you have a table, this pressure presses on it, but it also presses on the underside, so of course you don't notice this immense pressure. But there is an immense force available that allows us to do something with it, and this hints at the second principle, pushing air down, because that is also about pressure differences. If we look at this airplane, the A380, at its weight and its wing surface area, then we can calculate per square meter what the wing carries: the wing loading. If you then look at the pressure difference that would be required between the top and bottom side of the wing to carry this weight, you see that this is about 600 kilograms per square meter, the wing loading. Compared with the total pressure of over 10,000 kilograms per square meter, this is almost 6% pressure difference between the top side of the wing and the lower side of the wing, and this is what generates the lift. How does the wing do this? What you often hear is that the top side of the wing has more curvature than the lower side, and that the air therefore has to go faster. And indeed, if we take two sheets of paper and blow a bit of air in between, then due to the higher speed there is indeed a lower pressure, and the sheets are pushed together. However, there is a reason why this explanation is often not right: there is no reason why the longer way along the top side makes the air go faster, because the air parcels do not need to arrive at the trailing edge at the same time. In fact, if you look at the animation, you see that the flow over the top even arrives at the end of the wing earlier than the flow along the lower side. There is no reason that they should arrive there at the same time, and therefore it is not a correct explanation. The correct explanation is closer to this. Look at a wind tunnel which has a smaller section in the middle: the same amount of air goes in as goes out, which has to be the case, because where else would it go? In the middle it has to go faster, because there is less space. If you then replace the lower side of the tunnel by this curved wing shape, you can see that it is actually the same; only there is no top side, the air above just goes straight, but that doesn't matter for the flow along the lower side. So the fact that air is pushed aside is causing the higher speed, and that is causing the lift, indeed through the pressure difference that comes with the higher speed. This can also be calculated in a more physical, mathematical way. If you look at what this lift force depends on: of course the speed at which you travel, but also the surface area of the wing, the air density, the shape of the flow around the wing, and the angle at which the air meets the wing. If we take these quantities and put them in a formula by giving them names, we can see that the lift can be expressed in this way: lift is the lift coefficient, for the shape and the angle of attack, times one half rho V squared, for the density and speed, times the surface area: L = C_L · ½ρV² · S. This ½ρV² is what we call the dynamic pressure.
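As a numerical companion to the lift formula just introduced, the sketch below evaluates L = C_L · ½ρV² · S for a large airliner in cruise-like conditions. The numbers are round, illustrative values chosen in the spirit of the lecture, not official aircraft data:

```python
# Lift equation: L = C_L * 0.5 * rho * V^2 * S
# Round, illustrative numbers in the spirit of the lecture (not official data).

def lift(c_l: float, rho: float, v: float, s: float) -> float:
    """Lift in newtons: lift coefficient, air density (kg/m^3), speed (m/s), wing area (m^2)."""
    return c_l * 0.5 * rho * v**2 * s

# A large airliner in cruise-like conditions (illustrative values):
c_l = 0.5     # lift coefficient
rho = 0.38    # air density at ~11 km altitude, kg/m^3
v = 250.0     # true airspeed, m/s
s = 845.0     # wing surface area, m^2

q = 0.5 * rho * v**2   # dynamic pressure, N/m^2
l = lift(c_l, rho, v, s)

print(f"dynamic pressure: {q/1000:.1f} kN/m^2")                  # ~11.9 kN/m^2
print(f"lift: {l/1e6:.2f} MN, supporting ~{l/9.81/1000:.0f} t")  # ~5 MN, ~510 t
```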
This dynamic pressure is actually the same as what you would feel if you would hold your hand outside the car window: the half rho V squared is the pressure you feel against your hand. And as the lift coefficient is dependent on the angle of attack, we can make a little graph of it. Then we can see that there is even a point where the lift coefficient is one. That means that what the wing then feels in terms of lift is the same as if you would put this whole airplane in the wrong direction in the flow and let it feel the full dynamic pressure; that is the force that the wing generates in terms of lift. So it is an immense force. But luckily for us, the plane is not oriented like this in the flow but horizontally. There is still a price to pay, though, in terms of drag, and that will be discussed in the next lecture." "Emission Reduction: Potentials and Costs","https://www.youtube.com/watch?v=dr71ZAvUyqI","Welcome to the third week of the course Designing a Climate-Neutral World. In the following videos, you will learn how to analyze emission reduction options, often referred to as mitigation in the climate change community. This first video will introduce two concepts: mitigation potential and mitigation cost. Thus we have two main questions we will aim to answer. Whenever discussing mitigation options, we can ask ourselves, first, how much CO2 does a certain option mitigate, and second, how much does it cost per ton of mitigated CO2 emissions? These are two important questions, but they are not the only questions that are relevant. In a later video, we will come back to some other elements that also play a role in the decision-making process when discussing mitigation. In this first video, we will focus on the determination of potentials. Let's first give a definition of mitigation potential. The mitigation potential is simply the quantity of net greenhouse gas emission reduction that can be achieved by a given mitigation option. A mitigation option can be anything, like home insulation, application of wind energy or eating less meat. We talk about net greenhouse gas emission reduction because there are not only options that reduce CO2 emissions, but also options that extract CO2 from the atmosphere, and some that do both. It is important to always specify the baseline against which emission reductions are counted. We will get back to that later in this video. First, we consider the technical potential, which has two types of constraints. One of them is all kinds of theoretical limits: there are limits to the size of the land or the size of the earth, and there are thermodynamic limits to the conversion efficiencies of all kinds of energy conversion equipment. The technical potential is also limited by what is technologically feasible at a given moment. At a given moment, we have a maximum conversion efficiency, for example of solar panels, or a maximum saving that can be achieved with certain types of technology that are available at a certain moment in time. The theoretical limits normally do not change over time, but the technologies of course can improve by innovation and new products that are invented and brought to the market. A small disclaimer here: sometimes non-technical constraints are occasionally taken into account when determining the mitigation potential, especially if they make up strong barriers against realization of the potential. The second type of potential that we use is the economic potential. The economic potential is part of the technical potential, so it is always smaller.
We only take that part for which the benefits are larger than the costs, so the part that is attractive in economic terms. We not only include pure monetary costs and benefits, but use a broader definition that includes all social costs and benefits. We will get back later to what this means. Let's do an example calculation of the technical mitigation potential. Suppose you want to estimate the technical potential for photovoltaic solar energy, also known as a PV system, on your roof. For this we make a number of assumptions: your roof is south-oriented, has the right inclination, and has a surface of 10 square meters. The question is then: what is the technical potential for solar energy for your home? The annual solar radiation on your roof is 1,500 kilowatt-hours per square meter, and assume that the conversion efficiency of available solar modules is now 20%. By installing a solar system, you will avoid production by fossil power plants. Assume that in your country the emission factor of electricity from the grid is 0.4 kilograms per kilowatt-hour. Now you can calculate the mitigation potential by multiplying these numbers to obtain 1,200 kilograms of avoided CO2 emissions per year. These are the emissions from the grid avoided by the maximum application of solar energy on your roof. It is good to also discuss here a number of related issues that have to do with the definition of technical potential. You may say: yes, I have now calculated the potential of the southern part of my roof, but the northern part may also be suitable for the installation of solar panels. However, the solar radiation on the north-facing surface is very low, so it wouldn't contribute much, and it seems very unattractive in economic terms. It is then not uncommon to exclude the northern surface from the technical potential. This is an example of the disclaimer added to the definition of technical potential. Another point may be that you say: I don't have the money to invest in a solar energy system. That is not a reason to exclude the option: PV panels may become cheaper, or the government may provide a subsidy. Anyway, always be transparent about how a technical potential is calculated and about the underlying assumptions and exclusions. One more point that I want to touch upon is that the baseline is important. The baseline is the starting position from which you determine the emission reduction potentials. If you talk about insulation of your home or solar energy on your roof, your baseline is probably quite simple: the baseline is to take no action, meaning no solar energy and no insulation. But if you look at a town or a country, there are a number of things that are important. If you look at the future, there will always be an increase in activity levels, for example an increase in the building stock, and the emissions will grow. You see that depicted here with the orange line. This is how emissions would develop due to the increase in activity if, for example, the number of buildings were to increase and the new buildings were similar to the existing ones. We call that frozen technology. That will likely not occur, for a number of reasons. First of all, part of these homes will be newer homes; they already have better insulation because of the changed building standards. In addition, there will be at least some people that plan to insulate their walls or install solar panels. This is what we call the business-as-usual development.
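Returning to the rooftop PV example worked through above, here it is as a small sketch, so you can vary the assumptions; the input values are the ones assumed in the lecture:

```python
# Technical mitigation potential of rooftop PV, as in the worked example:
# avoided emissions = irradiation * area * module efficiency * grid emission factor

def pv_mitigation_potential(irradiation_kwh_m2: float,
                            area_m2: float,
                            efficiency: float,
                            grid_factor_kg_kwh: float) -> float:
    """Avoided CO2 emissions in kg per year."""
    annual_yield_kwh = irradiation_kwh_m2 * area_m2 * efficiency
    return annual_yield_kwh * grid_factor_kg_kwh

# Lecture assumptions: 1,500 kWh/m2/yr, 10 m2 roof, 20% modules,
# grid emission factor 0.4 kg CO2 per kWh.
potential = pv_mitigation_potential(1500, 10, 0.20, 0.4)
print(f"Technical potential: {potential:.0f} kg CO2 avoided per year")  # 1200 kg

# If module efficiency improves to 50%, the technical potential grows accordingly:
print(f"With 50% modules:    {pv_mitigation_potential(1500, 10, 0.50, 0.4):.0f} kg/yr")  # 3000 kg
```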
The emissions associated with this business-as-usual development will probably be substantially lower than in the frozen-technology scenario. So it is important to always clarify what baseline is used. When we look into the future, it is also important to acknowledge that the mitigation potential can change over time. Technology can improve, thus leading to an increase of the technical potential. For example, if solar panels would become available with a conversion efficiency of 50%, the solar energy mitigation potential for your home would substantially increase. Similarly, cost reduction of technologies may lead to an increase of the economic potential. Having discussed today's topics, we come to several conclusions. Firstly, the calculation of mitigation potentials should always be about what can be done with maximum effort. But sometimes other constraints are taken into account. And that is why it seems..." "Energy Switch kick-off meeting 10 March 2022","https://www.youtube.com/watch?v=OO5e3SIg7i8","We are here this morning at The Green Village, in the Co-Creation Centre, in the context of Energy Switch: a morning devoted to co-creation around the energy transition. [The remainder of this recording is largely unintelligible in the source transcript.]" "Dr. Fatih Birol's interview for: ""Inclusive Energy Systems- Exploring Sustainable Energy for All""","https://www.youtube.com/watch?v=PM1-aMUh0ls","I believe energy is a very, very interesting topic. It is a topic which is a good bridge between the coursework, the homework, and the international events they see on television; it is everywhere. Energy is economics, energy is geopolitics, everywhere. So if they want to understand the big issues of the world, studying energy is a very, very good choice. Plus, I believe in terms of a career, in terms of having access to financial resources, to make money, to find good jobs.
Energy is a wonderful, wonderful choice, and energy is, in my view, one of the very few areas where you can make a difference in the world, if you want to make a difference in the world. So my suggestion to all the students is to work on energy, and not only to work, but to work very hard. I believe that energy is a very, very good choice." "Closure Works Course: Use of Cable Cars for Closing Estuarine Gaps","https://www.youtube.com/watch?v=VSJfMafVgdU","We are now at the Grevelingendam. This is one of the secondary dam closures of the Delta Works. This gap was closed with a cable car. There were two channels here, and in total 1,700 meters of cable car was constructed over these channels. Two pylons were used for that, each covering a gap of 600 meters. Normally, cable cars are anchored in rock, but there is no rock in Holland; therefore, we had to make our own anchoring point. This concrete structure here was used as the anchoring point of the cable car. The carrying cable of the cable car was wrapped around that circular part you see over there, and from there it went that way to the pylons. The cable car was used to dump stones into the estuary. [A passage of historical newsreel narration follows that is unintelligible in the source transcript.] The Grevelingendam was built near a tidal divide, so closing was relatively simple. But as was explained in the previous video, we wanted to test the method of closing by a cable car again. Basically, the system worked quite well. The drawbacks were that loading the nets with rock took quite some time. Also, lowering the nets before dumping was a very time-consuming operation. Lowering the nets was needed because otherwise the rock segregated too much and arrived at the bed not as well graded as we wanted. Based on this experience, it was decided to continue future closing operations with artificial blocks: concrete cubes of one cubic meter. The first cable car closings were done by dumping stones into the water. However, dumping stones from nets proved a little bit difficult, and therefore in later stages we dumped concrete blocks of one cubic meter.
All of these blocks were used to close the Brouwersdam. Later on, it was the intention to also use the same blocks for closing the Eastern Scheldt. However, the plans changed and the Eastern Scheldt was not closed with blocks. So a number of these blocks were left over, and as you see, they are now used as all kinds of obstructions to prevent cars from being parked in the wrong place. One of the big advantages of a vertical closure using a cable car is that the flow pattern over the dam is rather smooth. This can be seen in this picture. Because of the regular flow pattern, not only can the rock size of the closure be smaller, but the extent of the bed protection can also be less. [The remainder of this recording, historical newsreel narration about the cable car closure, is unintelligible in the source transcript.]" "Closure Works Course: Introduction","https://www.youtube.com/watch?v=78Hzx_V_msQ","Hello, my name is Henk Jan Verhagen and I'm associate professor in hydraulic engineering at the University of Technology. I will be the course leader of this ProfEd course on coastal closure works. A few words about myself. After my studies at the university, I joined a contractor and worked there mainly on mathematical modeling for closure works and other works. After that I moved to the Ministry of Public Works, where I was in charge of the final part of the Delta Works in the Netherlands. After that I also did some consultancy work at the Ministry. After that period I moved to IHE, and there I was in charge of the education of coastal engineers from all over the world.
So 16 years ago I moved to this university and became associate professor in hydraulic engineering, teaching closure works, breakwaters and stone structures. Maybe also relevant to mention is that I acted as a reviewer for the Saemangeum project in Korea. In this course we will discuss the technical aspects of closure works, so we will not focus on the societal and environmental impacts of the works. These are very important things, but they are outside the scope of this specialized course. You will find the background information for this course in the book and in the course material that you find on the internet. The book is digitally available and will at a later stage also be available as a hardbound book. On the internet you find exercises, mini-lectures and so on. The exercises you have to do and send in, uploading them to our server. They will be corrected by me, and you will get a response in the week after that. Apart from the response from me, you will also get an automatic response from the system with the standard answers. After completion of the whole course, you will get a certificate of participation. The idea is that on every Monday or Tuesday you upload your results of the test to the system, and then I will correct them in the week after that. Please follow the scheme as mentioned in the introduction of the course on the computer. I wish you good luck with this course and a lot of success." "Advanced Leadership for Engineers - Leading Teams","https://www.youtube.com/watch?v=5ZwYLEg1nxI","As an engineer, you are used to working in project teams to achieve complex tasks and to change the world for the better. Perceived incompatibilities or differences among team members can easily lead to conflict. Being a leader, how do you deal with the different kinds of conflict in teams? In my profession, even the smallest mistakes can have enormous consequences. Not only is the profitability of my department at risk, so are the environment and the safety of my clients. There is simply no room for error in my project team. All team members know what they are doing, maybe too well: they all think they know best. Their views conflict almost all the time, and sometimes it gets personal. At the same time, we must trust each other in order to make no mistakes. How do I cope with this? We will explore group dynamics and conflict in teams in this professional education course. I challenge you to practice the skills that you learn within this course in your own workplace, to immediately see results. Join a network of fellow engineers pursuing and holding leadership positions. Join the course Advanced Leadership for Engineers: leading teams, organizations and networks." "Spacecraft Technology: Rocket and Onboard Propulsion","https://www.youtube.com/watch?v=pzExTmru9H4","Welcome to the propulsion part of the Spacecraft Technology course. My name is Andrew Rocher-Bone and I am Assistant Professor in the Space Engineering Department at the University of Technology. This part of the course is divided into three chapters. In chapter one, we will discuss the main equations used to characterize the performance of a rocket or propulsion system, and we will take a closer look at liquid and solid propellant engines. Chapter two will be about electric propulsion and advanced concepts. Finally, chapter three will focus on micro-propulsion systems.
In chapter one, we will first recall the basics of rocket propulsion, including the concepts of thrust and specific impulse. Then we will briefly mention the different types of propulsion, and we will see how the performance of a rocket can be characterized in a simplified way by means of the ideal rocket theory. We will take a closer look at liquid and solid propellant engines, and finally we will try to understand how a real rocket works and what the most typical deviations from the ideal rocket theory are. Let's get started with our introduction. Here you see a nice picture of the European rocket Ariane 5 at launch. I am sure that you have seen many pictures of big rockets similar to this one, but do you know what exactly rocket propulsion is and how it differs from aircraft propulsion? We should refer to rocket engines as pure reaction systems. In a rocket engine a very large amount of fluid, or propellant, is expelled at very high speed in a direction opposite to the direction of flight. Remember Newton's third law of motion: every force is always associated with a reaction of equal magnitude and opposite direction. Thus, if the rocket pushes the propellant out, the propellant in turn pushes the rocket in the opposite direction. This is the same principle on which aircraft propulsion works, but in aircraft engines the fluid accelerated by the engine comes from outside, while a rocket needs to bring its own propellant on board. From the Introduction to Spaceflight course you should already be familiar with the most important performance parameters of a rocket. The thrust is simply the force produced by the rocket and is obviously measured in newtons. The specific impulse is defined as the ratio of the total impulse generated by the rocket to the total weight of propellant used to generate it. It is typically measured in seconds. The delta-V is the velocity change, in meters per second, experienced by the spacecraft in which the rocket is installed when a given mass of propellant has been expelled. In Introduction to Spaceflight, the equations for these three performance parameters have been derived and discussed. Here they are in a nutshell. We are not going to derive them again in this course; I will just shortly recall some of the most important aspects associated with these equations. Let's start with the rocket thrust equation. If you look at it carefully, you will notice that the thrust is a combination of two different contributions. The first one is what we call the momentum term. This is the thrust generated by the actual momentum exchange between propellant and rocket. This term is proportional to the mass flow rate of propellant and to the velocity at which the propellant is expelled, or jet velocity. The second contribution is called the pressure term. This term is generated by the difference between the pressure at which the propellant is expelled from the rocket exit section and the surrounding ambient pressure. Let's take a closer look at the pressure term. It is clear that this term is a function of the ambient pressure, and thus of the altitude at which the rocket is flying. Maximum thrust will be achieved in vacuum, where the ambient pressure is zero. In many cases the pressure term is much smaller than the momentum term and can be neglected. However, this is not always true: the pressure term can be significant in big rockets flying at low altitude.
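As a numerical companion to these equations, the sketch below evaluates the thrust equation (momentum term plus pressure term), the specific impulse, and the rocket-equation delta-V for a hypothetical engine. All input values are invented for illustration, and the rocket-equation assumptions discussed later in this chapter (no external forces, constant equivalent jet velocity, thrust opposite to the flight direction) are taken for granted:

```python
# Rocket performance sketch: thrust with momentum + pressure terms,
# specific impulse, and Tsiolkovsky's rocket equation.
# All input values are hypothetical, for illustration only.
import math

G0 = 9.80665  # standard gravitational acceleration at sea level, m/s^2

def thrust(mdot, v_j, p_e, p_a, a_e):
    """F = mdot * v_j + (p_e - p_a) * A_e  (momentum term + pressure term)."""
    return mdot * v_j + (p_e - p_a) * a_e

# Hypothetical engine: 250 kg/s of propellant at 3,000 m/s jet velocity,
# exit pressure 40 kPa, exit area 1.5 m^2.
mdot, v_j, p_e, a_e = 250.0, 3000.0, 40e3, 1.5

f_sl  = thrust(mdot, v_j, p_e, p_a=101.3e3, a_e=a_e)  # at sea level
f_vac = thrust(mdot, v_j, p_e, p_a=0.0,     a_e=a_e)  # in vacuum (maximum)

v_eq = f_vac / mdot          # equivalent jet velocity in vacuum, m/s
isp  = v_eq / G0             # specific impulse, s

# Delta-V for a 100 t spacecraft burning 60 t of propellant (rocket equation):
m0, mp = 100e3, 60e3
delta_v = v_eq * math.log(m0 / (m0 - mp))

print(f"thrust: {f_sl/1e3:.0f} kN at sea level, {f_vac/1e3:.0f} kN in vacuum")
print(f"equivalent jet velocity: {v_eq:.0f} m/s, Isp: {isp:.0f} s")
print(f"delta-V: {delta_v:.0f} m/s")
```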
You can clearly see that the thrust is practically constant at altitudes higher than 25 kilometers, but decreases significantly at low altitude. In order to simplify the thrust equation we can define an equivalent, or effective, jet velocity. This parameter takes into account both the momentum and the pressure term and allows us to write the thrust in a compact way, simply as mass flow rate multiplied by velocity. Keep in mind, however, that the equivalent jet velocity has no physical meaning. It is a pure mathematical entity used to simplify the way the equations are written. Another very important performance parameter for a rocket is the specific impulse. What you see here is the complete mathematical definition of this parameter. In short, it is defined as the ratio of the total impulse generated by the rocket, the thrust integrated over the burn time, to the total weight of propellant used to generate it. This equation can be written in a much simpler way if the equivalent jet velocity is constant over time, which, as we will see, is true in many cases of practical interest. In this case, recalling that the thrust is simply mass flow rate times equivalent jet velocity, the specific impulse is simply equal to the equivalent jet velocity divided by the gravitational acceleration. Remember that the gravitational acceleration used to calculate the specific impulse is always the value on Earth at sea level, equal to about 9.8 meters per second squared, independently of the place where the rocket or spacecraft is flying. The last important performance parameter is the delta V, usually calculated by means of this equation. This is known as the rocket equation, or Tsiolkovsky's equation, after the scientist who derived it for the first time. This equation gives the velocity change of a spacecraft with initial mass m0 when a mass mp of propellant is used by its propulsion system with a given equivalent jet velocity. However, this is true only under a number of assumptions. First, there shall be no external forces acting on the spacecraft, such as gravity or atmospheric drag. Second, the equivalent jet velocity shall be constant over time. Finally, the propellant shall be expelled in a direction exactly opposite to the flight direction. You can easily imagine that in many practical cases these assumptions are not all true. When at least one of them is not met, the delta V calculated by means of the rocket equation is no longer the actual velocity change of the spacecraft, but it is still a good indicator of the energy transferred by the propulsion system to the spacecraft. We have now seen a short overview of the main performance equations of a rocket. In the next video, we will look at the most important types of propulsion and how their performance can be characterized. Thank you for your attention." "Online Learning Experience - The teacher's perspective","https://www.youtube.com/watch?v=Kg_r1w6uznc","We think that by providing a mix of activities we include a lot of people and also meet different preferences in learning styles, because we also think that you can learn design by doing and by looking at each other's work. I think they are quite active, so actually we're surprised this year; we're really happy to see so many people joining the discussion and also introducing themselves on our world map. With this idea we try to make things more interactive and more interesting, also by providing variation in the types of exercises.
I think this is really important for the motivation of the students, actually. Yeah, we try to really engage the students in the course. We try to motivate them first of all by giving a lot of context to our problems, problems from practice, just to show them that it's not just bare mathematics; it's really useful for the engineering study they will get into. So I think that's really important: providing the context, also the interactive exercises, and also the variation in types of exercises. The biggest success of this course, I think, is that professionals bring in their own problems from their own companies into a case, and students can also work on that case. For students it's really motivating because they work on real-life problems. But also for the professionals this is good, because students give them new ideas for their problems, so it's really a win-win situation for both groups. Yeah, we follow every week what everyone is doing; we can see in the online learning assignments what everyone has done. Another special thing about the course was that, since we provided the source code, since we had a lot of simulations, and since we produced all of the content on the fly essentially, we could have the students grab all the materials and modify anything they would like. So it's a general thing for online education, I think: being able to disassemble what you see and see how it's been done has quite a lot of added value. It's partly science and partly an art. The science part comes first: the lecture that I do, the teaching, the theory that I give you. The art is that you then have to work with that, and that is much more difficult, because all of a sudden there's nobody telling you what A, B and C are; you have to figure out what the alphabet is yourself. They are active, they're busy with the problem, and that's the good way; they're not only listening in a passive way, and they can do it at home, by the way. In our glass plate example, for example, we showed them a little bit of the way we are thinking about a problem, so we work out a complete problem in the way we think is the best way to do it: start with a sketch and then work it out. I have a personal teaching style and it's very difficult for me to move away from it. With students of this age you get a more open discussion, a more open way of trying things and letting the student be active. Whether a student first wants to click around and see what the animation does is up to the students, and that's really very individual. That's very tough if you do it as a teacher, because you automatically go back to how you teach. I think thinking about online learning also improves your own education, because you are rethinking everything: how you present, what the goals are, what the learning objectives really are, what I want to achieve, because in a small movie you have to do it in five or ten minutes. You have to select exactly the right things to put in it, and that, I think, is the strength of it: thinking and rethinking your own lectures forces you to think about your educational style, because students cannot react on the spot. If they react, they do it on a forum, and those messages are in a kind of waiting queue before I can respond, and that means that my message should be clearer. It should have fewer traps where they can get trapped. If they read, that's an advantage: they can read and re-read the material, but they should not get stuck at the exact same point.
It should somehow help them to move on in the ordinary way. Yeah, absolutely. I really like it. We have fun making it. That's the most important thing." "Carpe Diem A new day for flexible MOOC design","https://www.youtube.com/watch?v=ZiK_19HarmM","How do you make the making of a MOOC a great experience? At Delft University of Technology, we have developed 25 MOOCs and over the past years gained a lot of experience in the design and development of MOOCs, online and blended courses. The design and development of these courses is supported by our e-learning developers, who face an interesting challenge, namely the diversity in courses. The course topics range from engineering and design to science, and the level ranges from pre-university to master's or PhD. How do we, as e-learning developers, support so much diversity in a personalized way and ensure course teams receive the support that suits their needs? And which approaches can support this flexible style of advising course teams? This is why we decided to choose an approach that is both very simple in its use and takes into account the specific nature of online courses. The approach makes the course design explicit on a timeline and allows for interaction in the course team. Therefore, we chose to work using the Carpe Diem approach from Gilly Salmon. Here are the basic steps taken in the Carpe Diem approach, as I was explaining to a MOOC course team. So, the stages of Carpe Diem go from the blueprint that I just talked about, the sort of purpose or idea, to the storyboard that we'll do next. So this is what we, the storyboard, will start with today, I think. And then we make a prototype, you know: this is now what we agree on, this is what we start working from. And what we can then decide to do is a reality check, so we'll have to organize that, where some of the lecturers that will be involved, some of the other course team members, maybe some students, will look at the plan and shoot at it and say, like, wow, I think it would be nice if you do it like that, so that you really create a very beautiful course. So once we're ready for this, we can organize that, and that's when you can review your course, make an action plan and divide up all the work. You just saw one of the teams I currently support in the design of their MOOC on visualizing the unimaginable. For this paper, I've collected our experiences in using the Carpe Diem approach in the design of five MOOCs and five online courses. I will focus on our experiences at the Extension School of TU Delft in facilitating the design of MOOCs. Let me start with explaining the challenges I experience as an e-learning developer. Can you imagine this situation? Today, I meet the course team of a MOOC on visualizing the unimaginable. I have met the course team leader, but the team comes together for the first time, and I wonder: why do they want to offer this MOOC? How will the course team collaborate? Do I give a workshop or gradually introduce the storyboard? Do they expect a content-driven course? Have they considered their learners? I will now show you an example of how this particular course leader approaches the course design. He is a very motivated lecturer who is willing to invest a lot of personal time in his MOOC. He has already started building a prototype of the course in the edX platform by himself. However, this is the first meeting he has with his course team. Luckily, he himself reflects on one of the challenges I often deal with.
At the same time, you hear my response to accommodate the course leader. Is there a very big risk, for example, in starting to build at the same time? I think that's a very good question. I think you should do it the way you like. But you should keep in mind, and you should be open to, like: hey, if we make this more a process where we ask people for feedback, you should be ready to take it. So I may do some things which I later on have to redo. Yeah. And sometimes it means that you do too much work and then it takes you too much time. This example is representative of my experiences and of the need to use the Carpe Diem approach in a flexible way. In this case, the course leader has already started a design and even applied it in the edX environment as a prototype, without using the input of his team; he prefers to see what it will look like before he starts designing. However, he is also very reflective on the possible dangers of this. So I focused in my advice on making the course leader aware of the risks of this approach. But if he prefers to work in this order, then I will adapt my approach to his preferred order. In the video, I showed you one example of how I have introduced the Carpe Diem approach to a course team. In the article, we describe possible solutions for two main challenges. Firstly, how do I introduce the value of the Carpe Diem approach to the course team? As you've seen, one of the proposals I have for adapting the Carpe Diem approach is to be flexible about the order in which the approach is used. Usually, I just start putting down post-its and the storyboard forms gradually on the table while we are talking. This I have done especially with course teams that are less visually oriented. Usually, course teams from industrial design are more visually oriented, and the storyboard exercise resembles their normal way of working. The second challenge I describe in the article is how I use the Carpe Diem approach to explain what the nature of a MOOC is compared to a regular course. Most lecturers have an audience in mind that resembles their regular students, but may not always consider a much more diverse and larger audience. If you would like to hear more about how we have used the Carpe Diem approach to support course teams, and which proposals we have come up with so far to use the Carpe Diem approach in a flexible way, please come to the session at the conference. We are looking forward to seeing you there, or in the other Delft session, where we present how we evaluate teaching and learning in MOOCs."