
Distributed Tic-Tac-Toe: Requirements, Design & Analysis

This is a small case study in application design representation and analysis. For the code and documentation, see http://www.se.rit.edu/~swami/dttt.

Functional requirements

1. Users connect to a central server. The server pairs them up to play games with each other.
2. The gameplay follows the typical Tic-Tac-Toe pattern. The system must enforce the rules and automatically detect wins.

User stories

A list of use cases / user stories with variations and exception scenarios.

1. User connects to a server and is assigned to a particular game with another player.
   a. User connects to a server and waits for a partner.
   b. User tries to connect to the server and fails.
2. User plays a game of Tic-Tac-Toe with another user.
   a. User plays and wins the game.
   b. User plays and loses the game.
   c. User plays and the game ends in a draw.
   d. User quits the game partway through.
3. User makes a move.
   a. User makes a valid move.
   b. User attempts an invalid move.
   c. User waits a long time without making any move.
   d. User sees the opponent make a move.
4. Administrator starts up the server.
   a. Administrator terminates the server.

The variations and exceptions are mildly interesting, e.g. 3c. This application is so simple that it is barely worth writing use cases and variations; the simple numbered requirements above told us nearly as much.

Operational profiles

Operation                                      Frequency
User launches client and connects to server    100 / day
User plays a game with another user            40 / day
User makes a move                              400 / day
Administrator starts up server                 1 / week

This is an uninteresting set of operational profiles, because there are so few operations and their relative frequency is very obvious. There is only one operational mode (unless the debug mode of the server is considered a separate operational mode).

Functional behavior

We could draw a state diagram for the server:

[State diagram: Startup -> Ready; Ready -> Ready on "Player joins / Game started"; Ready -> Shutdown]

This adds only a little value over the functional requirements statement. However, this does say something more: that players are paired up in the order they arrive, not in a random order. Thus the state diagram is expressing some clearer semantics of just how the server behaves.

We might also use pseudocode to represent the conceptual logic of the GameController. Note that we have moved on to requirements analysis or design.

    Setup game board
    whoseTurn = 1   // player 1 goes first
    While not game over
        Get move from player
        Implement move on board
        Send board status update to players
        If player won
            Declare winner and loser
        Else if no more room on board
            Declare game to be a draw
        Else
            Flip turns

We could write pseudocode for the "get move from player" operation:

    Check sockets to see if any messages available
    If messages available
        Receive message
        Parse message
        Implement command

These could just as easily be shown as sequence diagrams or control flow charts. It is definitely worthwhile drawing sequence diagrams for this logic.

[Sequence diagrams: Player joins; Waiting for second player]
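The GameController pseudocode can be made concrete. The following is a minimal runnable sketch in Python; the actual implementation is in C/C++ using sockets via the SocketSet library, so the network here is replaced by a scripted list of moves, and all names are illustrative only.

```python
# Runnable sketch of the GameController loop pseudocode.  The network is
# stubbed out with a scripted move list so the game logic runs stand-alone.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return the winning mark ('X' or 'O'), or None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play(moves):
    """Run the loop: implement each move, detect win/draw, flip turns."""
    board = [' '] * 9
    marks = {1: 'X', 2: 'O'}
    whose_turn = 1                       # player 1 goes first
    for player, cell in moves:
        if player != whose_turn or board[cell] != ' ':
            continue                     # invalid move: same player's turn
        board[cell] = marks[player]      # implement move on board
        if winner(board):
            return f"player {player} wins"
        if ' ' not in board:             # no more room on board
            return "draw"
        whose_turn = 3 - whose_turn      # flip turns
    return "in progress"

print(play([(1, 0), (2, 3), (1, 1), (2, 4), (1, 2)]))  # player 1 takes the top row
```

Note how the invalid-move branch (user story 3b) falls out naturally: the move is rejected and the turn does not flip.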

The pseudocode hides a design decision that will be more evident in a sequence diagram: is it the GameController that checks for messages from its players, or is that common to the entire server? The GameController can actually just wait for messages from a single player, rather than listening to both. [In the current implementation, the server performs a common check for all the GameControllers. This is due to the convenient presence of this facility in the SocketSet library, and because this way we avoid the complexities of multithreading.]

Structure

It is definitely worth drawing a class diagram. Now we are clearly in the design phase.
[Class diagram: Server (1) --- (0..n) GameController; Player; SocketSet library]
The major design decision here is that there will be a GameController class, and that the SocketSet library will be used for communication.

Deployment view

This ends up being much more interesting than the other views we have seen so far. This is not surprising, because the distribution is indeed the primary source of complexity in this application.

[Deployment diagram: users log into a Unix box from client workstations via remote login software, then start up the DTTT client; the clients on the users' Unix boxes talk to the Server and GameController on another Unix box over a TCP/IP Ethernet LAN]

This adds a new perspective: users may not be directly using the client application, but might be working from a personal PC and remotely logging into a Unix box to run the client. This indicates additional hops through the network that must be taken into account for performance. Another interesting design decision here is that the GameController will be co-located with the server (this is not the only possible design).

Message sequence

Another very interesting design view for this application is the message exchange protocol:

[Message sequence chart between Player1, the Server/GameController, and Player2: Connect, Id, Wait, State, YourTurn, Move, State, repeated till game over, then Result to both players]
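The chart implies a small, fixed message vocabulary. Purely as an illustration (the document does not specify the wire format), a line-oriented codec for these messages might look like this; the "TYPE arg1 arg2" framing is an assumption, not the actual protocol:

```python
# Hypothetical encoder/decoder for the messages in the chart above,
# assuming a simple "TYPE arg1 arg2" text line per message.

KNOWN_TYPES = {"Connect", "Id", "Wait", "State", "YourTurn",
               "Move", "Result", "Quit"}

def encode(msg_type, *args):
    """Serialize one message as a newline-terminated text line."""
    if msg_type not in KNOWN_TYPES:
        raise ValueError(f"unknown message type: {msg_type}")
    return " ".join([msg_type, *map(str, args)]) + "\n"

def decode(line):
    """Parse one line into (type, args); reject unknown message types."""
    parts = line.strip().split()
    if not parts or parts[0] not in KNOWN_TYPES:
        raise ValueError(f"invalid message: {line!r}")
    return parts[0], parts[1:]

# One half-turn of the exchange in the chart:
wire = encode("Move", 4)    # Player1 places a mark in cell 4
print(decode(wire))         # ('Move', ['4'])
```

Rejecting unknown types at decode time is exactly the kind of explicit message validation the dependability analysis below finds missing from the current server.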

This shows the flow of the game and the message exchanges very clearly, and would be very helpful in implementing both the GameController and the Player logic. We could show an alternative scenario with a Quit in it.

Performance goals

Response time on moves < 100 milliseconds (i.e. instantaneous).
Server and client startup time < 500 ms (practically instantaneous).
Server should be able to support up to 50 users, and add less than 5% to the load on the server machine.
60 players playing simultaneously should consume < 5% of the available network bandwidth.

Performance analysis

Use the deployment diagram and the get-move pseudocode to identify the key elements of response time on moves:

1. Time to obtain input from user.
2. Time to send message to GameController.
3. Time for GameController to generate State messages.
4. Time for messages to reach Players.
5. Time for display of state to Player.

For 1, if we ignore waiting time for user input (not part of response time), the time for the input operation would be that for a few hundred instructions, i.e. a few microseconds. To this, we must add the network delay caused by the use of remote login. Let the network delay be nD (typically of the order of a few milliseconds). Thus the total time for 1 is approximately nD.

For 2, the time taken is approximately nD. We must also add the time for the receive() operation at the GameController. receive() itself would take about 50 microseconds (these system calls can be benchmarked). If there are 50 users all transmitting simultaneously, then this could be a significant source of delay.

For 3, the time is that for a few thousand instructions, again less than 1 millisecond.

For 4 and 5, the time delay is again approximately nD in each case. 4 also has a receive() delay added, though it is smaller because the client is only servicing one source of messages.

Thus the total delay is approximately 4 * nD, plus the receive() at the server. Assuming that users take 2 seconds to make moves on average (fairly quick), a message would arrive at the server every 2000 ms / 50 = 40 milliseconds. Each message may take about ~5 ms of TCP/IP processing (this can be benchmarked), and the server will add < 1 millisecond of processing. Thus the server is moderately heavily loaded, and receive delays can be of the order of 50-100 ms (assuming the packet gets backed up with about 10 packets ahead of it, although that is actually rare). The network delay would depend on the load. We can benchmark to get the network delay under normal load: probably 20-50 ms.
Plugging in these numbers, the response time is 100-250 ms, somewhat worse than the goal of 100 ms. The startup time for the server is all computation. It would take only a few hundred microseconds (typical processors can do ~10-50 lines of simple C/C++ code per microsecond; again, this can be benchmarked). So we would be well under target.
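The response-time arithmetic can be made explicit in a couple of lines. The delay values below are the rough per-step estimates from the analysis (nD taken as 25-50 ms, receive() queueing as 5-50 ms), not measurements:

```python
# Back-of-the-envelope check of the response-time estimate, in ms.
# All inputs are the rough estimates from the text, not measurements.

def response_time_ms(nd_ms, receive_ms):
    # steps 1, 2, 4 and 5 each cost roughly one network delay nD;
    # step 3 adds under 1 ms of GameController processing
    return 4 * nd_ms + receive_ms + 1

light = response_time_ms(nd_ms=25, receive_ms=5)    # lightly loaded server
heavy = response_time_ms(nd_ms=50, receive_ms=50)   # queue at receive()
print(light, heavy)   # 106 251 -- i.e. roughly the 100-250 ms range
```

The dominant term is clearly 4 * nD: shaving the extra remote-login hop would help more than optimizing the server code.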

The startup time for the client is all computation, except that we would add 2 network delays (to send a message back and forth to the server); it should still be well under 500 ms.

As we saw earlier, the CPU load contributed by the users is going to depend heavily on the receive() processing, mostly the TCP/IP code itself. We need to benchmark to determine this, but it could be significant, e.g. 5 ms per message (it will probably be quite a bit shorter, since the messages are short). If messages arrive every 40 milliseconds and take around 5 ms to process, the CPU load could be as high as 5/40 = 12.5% in the worst case.

To calculate the network load, we can look at the average length of messages. If 1 move is made every 40 ms, and for a move there are 4 messages, that would be 1 message every 10 ms on average. The length of a message from the application's perspective is about 20 bytes or less, but the TCP/IP headers may add quite a lot to this, say another 30 bytes. This would be 50 bytes every 10 ms, or 50 * 100 = 5000 bytes per second. For a 10 Mb/sec (megabits, not bytes) network, 5 KB per second would be less than 1% load, so there is no problem here.

All this was very approximate, but what it told us is that the startup constraints are no problem at all, nor is the bandwidth. Response time could fall short of expectations, and the TCP/IP and receive() functionality is the major contributor to CPU load when there are lots of users. The takeaway from this is that it is possible to do simple analysis that gives us a first-order understanding of what to expect from the application. You are welcome to benchmark and confirm or disprove the analysis (I haven't!).

Dependability analysis

Goals: Server should not crash if there are client problems. Invalid input from the user should not crash the client.

Possible client-related causes of server crash:

1. Client crash causes a socket to close and one player to vanish.
2. Client problems cause the server to receive invalid messages.
3. Client problems lead to long waits for the server.
4. Client problems lead to flooding of the server with messages.

Some tests with the SocketSet library show that 1 is not a problem. If a socket closes, the SocketSet will stop picking up messages from that socket, but will not crash. 3 also turns out not to be a problem: because of the use of poll() in the SocketSet library, one hung client will not block the server.
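The poll() behavior behind point 3 can be demonstrated with Python's selectors module standing in for the SocketSet library (this is an illustration of the mechanism, not the actual server code): a bounded-timeout readiness check reports only the sockets with data waiting, so a hung client cannot block the server.

```python
# Demonstration of the poll() idea: a client that never sends anything
# does not block the server's message check.  selectors here stands in
# for the SocketSet library used by the real implementation.

import selectors
import socket

sel = selectors.DefaultSelector()
chatty_srv, chatty_cli = socket.socketpair()   # client that sends a move
hung_srv, hung_cli = socket.socketpair()       # client that never responds
sel.register(chatty_srv, selectors.EVENT_READ)
sel.register(hung_srv, selectors.EVENT_READ)

chatty_cli.sendall(b"Move 4\n")

# Bounded timeout: even if no data ever arrives, the check returns
# within 1 second rather than blocking forever on the hung client.
ready = {key.fileobj for key, _ in sel.select(timeout=1.0)}
chatty_ready = chatty_srv in ready   # its message is waiting to be read
hung_ready = hung_srv in ready       # nothing to read, and no blocking
print(chatty_ready, hung_ready)

sel.close()
for s in (chatty_srv, chatty_cli, hung_srv, hung_cli):
    s.close()
```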

2 need not be a problem. We could design the server so that it detects and rejects invalid messages. However, the current design uses exceptions, and they are caught only at the outermost level, where they cause the server to exit. So our current implementation (and the detailed design of the exception handling mechanism) will not satisfy 2. Also, there is a major robustness hole, in that the server does not explicitly validate the client messages. So if the client somehow produces invalid messages, this could put the server into a bad state and cause a crash.

4 will not crash the server, but it will severely degrade its performance. We need fancier design techniques to deal with 4.

The client implementation does validate all input from the user, so user input is unlikely to crash the client. Also, errors from user input will not propagate to the server.

Usability analysis

The operability of the user interface is poor: it is very hard to see and enter numbers correctly. It is possible to enter invalid values, which then lead to error messages. This is not as good as preventing invalid inputs (OK, but could be better). Efficiency is poor: the user has to push <enter> after every command. Learnability is OK: the basic display is quite clear to any tic-tac-toe player. The long instruction is a bit uncomfortable and takes some getting used to.

There is a specific operability problem that would be clearer if we drew a state diagram of the Player interface: Quit will not be processed while it is the other player's turn. But quitting while waiting is the primary purpose of Quit, so that if the other player is not responding, you can quit the game. There is also a problem with configurability: to change the server name or port, it is necessary to recompile the application.

Evolvability analysis

Portability of the Player interface (i.e. switching to a different interface technology) would be difficult: the interface is not modularized but hardcoded into the Player class.
The server design would scale well to many players. The debugging messages at the server would be very helpful for bug detection and fixing.

The basic packaging into classes is good enough for reasonable extensibility in terms of adding new features, e.g. adding more commands, changing validations, etc. (OK, could be better).

Summary

The point of this small case study was to give a flavor of the whole area of design representation and analysis with a small application. Our analysis is not complete (we could have used many more representations and techniques to go into more depth), but it gives us a start on understanding the behavior. These are some of our takeaways about representation and analysis from the above:

- This application was barely complex enough for user stories.
- The deployment diagram and the message exchange chart would be the most useful, because the complexity is in the distribution and the client-server interaction.
- Class diagrams are nearly always useful.
- State diagrams are powerful, but this application did not seem to require them much.
- Even superficial performance analysis was enough to give us an idea of which performance requirements might be challenging and which would not be a problem.
- We could quickly get an idea of some of the dependability, usability and evolvability problems, and get a preliminary idea of strengths.

Based on these results, we could go back and improve the engineering of this application. This approach of choosing and using good design representations and performing analysis takes significant effort, but can be quite comprehensive. How much work it is worth putting into this depends on the complexity of the application and the criticality of the attribute requirements.
