EE122 Fall 2006 - Project 2
EE122 Project #2: Simple Web Crawler
Page Last Updated: Oct 30, 8:30PM
Initial checkpoint due at 11PM, October 18. Full project due October 26, by 11PM.
Updates and Announcements

  • : Grading rubric for Project 2 changed.
  • : Added test cases and evaluation script (version 0.1).
  • Oct 11, 2006: Added FAQ about usage of libraries.
  • Oct 14, 2006: Added Checkpoint submission instructions.
  • Oct 14, 2006: Added clarifications about checkpoint requirements.
  • Oct 15, 2006: More clarifications about the project specs.
  • Oct 25, 2006: Added latest evaluation scripts and test cases.
  • 6:55pm: Error in latest test cases corrected.
  • 8:30pm: Test cases and evaluation script used for final evaluation released.
In Project 1, you wrote a client which sends data to a server. The goal of that project was to learn socket programming and to get exposed to the client-server networking paradigm. In this project, we will look at a more complex application that uses the client-server paradigm - the World Wide Web. You will use the skills you gained in Project 1 to write a program that will interact with existing Web servers. This project will also illustrate how you develop a client that implements an existing text-based protocol. In the next project (Project 3), you will undertake designing a new protocol, as well as exploring peer-based network interactions rather than client/server.
The browser (Firefox, Safari, Internet Explorer, etc.) you use to browse the Web is a common example of an HTTP client that interacts with Web servers via the HTTP protocol. The goal of this project is to build a simple HTTP client that automatically crawls Web pages looking for a user-supplied keyword. Automated Web crawlers are often referred to as robots or spiders. For example, GoogleBot is the robot that crawls and indexes the Web pages that show up in Google search results. The robot you will build in this project will be much simpler than GoogleBot. In the remainder of the project description page, we will refer to our robot crawler as KeywordHunter.
KeywordHunter must accept the following parameters from the command line (a sketch of how these might be parsed is given after the list):

  • StartURL : The URL of the page from which crawling starts. The URL will be of the form (e.g., ) or (e.g., ).
  • SearchKeyword : The keyword we are looking for. The keyword will exist at a depth of 5 or less, starting at StartURL; otherwise it is deemed to be not found. The following example will clarify the meaning of depth: the page at StartURL is at depth 0, the pages linked to from inside it (say B1, B2 and B3) are at depth 1, pages linked to from inside B1, B2 and B3 are at depth 2, and so on.
  • OutputDir : If this parameter is present, then all successfully fetched pages are stored in this directory.
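A minimal sketch of the argument handling, assuming a C implementation and assuming the parameters are passed positionally as StartURL SearchKeyword [OutputDir]; the executable name and exact invocation syntax are not reproduced above, so this ordering is only a placeholder:

```c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    /* Hypothetical positional arguments: StartURL SearchKeyword [OutputDir]. */
    if (argc < 3 || argc > 4) {
        fprintf(stderr, "usage: %s StartURL SearchKeyword [OutputDir]\n", argv[0]);
        return 1;                     /* invalid command line arguments -> exit code 1 */
    }

    const char *start_url  = argv[1];
    const char *keyword    = argv[2];
    const char *output_dir = (argc == 4) ? argv[3] : NULL;   /* OutputDir is optional */

    /* ... crawl starting from start_url, looking for keyword ... */
    (void)start_url; (void)keyword; (void)output_dir;

    return 0;                         /* no error -> exit code 0 */
}
```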

To enable automated testing, the KeywordHunter executable must use a fixed name and must be invokable directly from the command line with the parameters listed above.

Exit Code : KeywordHunter must exit with exit code 0 if no error occurred. If some error (e.g., invalid command line arguments) occurred, the program must exit with code 1. You can set a program's exit code by calling exit(TheDesiredCode) or by specifying the return value of main(). Note that an unfound keyword is NOT an error.
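A short illustration of the exit-code convention described above, again assuming a C implementation; the two helper functions are hypothetical and exist only to show where each code is used:

```c
#include <stdlib.h>

/* Hypothetical helpers illustrating the exit-code convention only. */
void finish_keyword_not_found(void)
{
    /* An unfound keyword is NOT an error, so the program still exits with code 0. */
    exit(0);
}

void finish_with_error(void)
{
    /* A real error (e.g., invalid command line arguments) exits with code 1. */
    exit(1);
}
```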

KeywordHunter must use the HTTP GET request to fetch StartURL. It searches the fetched page for SearchKeyword. If the keyword is found, KeywordHunter just reports the page URL and line (more about output format later) and stops. If the keyword is not found, it fetches the pages linked to in the current page (more about how to detect these later) and searches them. This process is recursively carried out till you either find SearchKeyword or you give up searching on having crawled exhaustively up to the maximum depth of 5.
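A minimal sketch of the depth-limited recursion described above, assuming a C implementation; fetch_page(), search_page(), and extract_links() are hypothetical stand-ins for the HTTP, keyword-search, and link-detection code, and the printed output format is only a placeholder:

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_DEPTH 5   /* the keyword must be found at depth 5 or less */

/* Stubs so the sketch compiles; a real crawler implements these. */
static char  *fetch_page(const char *url) { (void)url; return NULL; }
static int    search_page(const char *page, const char *kw, int *line) { (void)page; (void)kw; (void)line; return 0; }
static char **extract_links(const char *page, size_t *n) { (void)page; *n = 0; return NULL; }

/* Returns 1 (after reporting the page URL and line) if the keyword was found
 * at `url` or in pages reachable from it, searching no deeper than MAX_DEPTH. */
static int hunt(const char *url, const char *keyword, int depth)
{
    if (depth > MAX_DEPTH)
        return 0;                                    /* give up on this branch */

    char *page = fetch_page(url);                    /* HTTP GET */
    if (page == NULL)
        return 0;

    int line, found = 0;
    if (search_page(page, keyword, &line)) {
        printf("%s %d\n", url, line);                /* placeholder output format */
        found = 1;
    } else {
        size_t nlinks;
        char **links = extract_links(page, &nlinks); /* pages linked to from this page */
        for (size_t i = 0; i < nlinks && !found; i++)
            found = hunt(links[i], keyword, depth + 1);
        free(links);                                 /* freeing each link string omitted */
    }
    free(page);
    return found;
}
```

The sketch searches depth-first; the text above does not prescribe a traversal order, so a breadth-first crawl would work equally well under the same depth limit.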

KeywordHunter must be able to fetch pages from existing HTTP servers; to do this, it must implement the GET request of HTTP 1.1. HTTP Made Really Easy is a simple tutorial that will teach you most of what you need to know.
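A minimal sketch of issuing an HTTP/1.1 GET request over a TCP socket in C, assuming the URL has already been split into a host, port, and path; this particular code is not prescribed by the project page, and error handling and response parsing are mostly omitted:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

/* Sends "GET <path> HTTP/1.1" to <host>:<port> and prints the raw response.
 * Assumes host, port, and path were already parsed out of the URL. */
static int http_get(const char *host, const char *port, const char *path)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        if (fd >= 0) close(fd);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    /* HTTP/1.1 requires a Host header; "Connection: close" lets us read to EOF. */
    char request[1024];
    snprintf(request, sizeof request,
             "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n", path, host);
    send(fd, request, strlen(request), 0);

    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);  /* a real crawler would store and search this */

    close(fd);
    return 0;
}
```

The response begins with a status line such as HTTP/1.1 200 OK, followed by headers, a blank line, and the page body; KeywordHunter would search the body for SearchKeyword and scan it for links to follow.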

EE122 Project Topic: Commodity Data Center Network Arch

Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. In our teamwork, we simulated a special data center topology, Fat-Tree, and applied its routing algorithm. We also applied flow classification and flow rearrangement on this data center topology. We then found that the Fat-Tree structure also has some drawbacks, which impelled us to look for new structures and new algorithms. We give an overview of several future perspectives, like combining the Fat-Tree structure and the BCube structure. Our programming language is Python (3.5.2), 64-bit. We could not possibly have finished this project without Professor Shyam Parekh's project reference, and we are grateful for his wonderful course this semester. Yilei Fu and Lihang Gong are concurrent enrollment students at UC Berkeley from Harbin Institute of Technology, China. Yuanfeng Wen is a student at UC Berkeley.









