Search Results

Search found 11208 results on 449 pages for '200 success'.

  • SQL Server: Efficiently dropping a group of rows in a table with millions and millions of rows

    - by Net Citizen
    I recently asked this question: http://stackoverflow.com/questions/2519183/ms-sql-share-identity-seed-amongst-tables (many people wondered why). I have a table with the following layout:

        Table: Stars
            starId      bigint
            categoryId  bigint
            starname    varchar(200)

    My problem is that the table has millions and millions of rows, so when I want to delete stars from it the deletes are too intense for SQL Server. I cannot use the built-in partitioning in SQL Server 2005+ because I do not have an Enterprise license. When I do delete, though, I always delete a whole categoryId at a time, so I thought of a design like this:

        Table: Star_1
            starId      bigint
            categoryId  bigint  constraint rock = 1
            starname    varchar(200)

        Table: Star_2
            starId      bigint
            categoryId  bigint  constraint rock = 2
            starname    varchar(200)

    This way I can delete a whole category, and hence millions of rows, in O(1) by doing a simple DROP TABLE. My question is: is it a problem to have hundreds of thousands of tables in SQL Server? The O(1) drop is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?
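
    A minimal sketch of the table-per-category idea, shown from Python with pyodbc (the driver, connection string, and Star_<categoryId> naming convention are illustrative assumptions, not from the question):

        import pyodbc

        # Illustrative connection string; adjust server/database to taste.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;"
                              "DATABASE=StarsDb;Trusted_Connection=yes")

        def create_category_table(category_id: int) -> None:
            # One physical table per category; the CHECK constraint is what
            # would let a partitioned view route queries to the right table.
            conn.execute(f"""
                CREATE TABLE Star_{int(category_id)} (
                    starId     bigint NOT NULL,
                    categoryId bigint NOT NULL
                        CONSTRAINT CK_Star_{int(category_id)} CHECK (categoryId = {int(category_id)}),
                    starname   varchar(200)
                )""")
            conn.commit()

        def drop_category(category_id: int) -> None:
            # Dropping the table is a metadata operation, so removing millions
            # of rows costs O(1) instead of a logged row-by-row DELETE.
            conn.execute(f"DROP TABLE Star_{int(category_id)}")
            conn.commit()

    On Standard edition, a local partitioned view (CREATE VIEW Stars AS SELECT ... FROM Star_1 UNION ALL SELECT ... FROM Star_2 ...) is the usual companion trick, so the per-category tables can still be queried as one logical table.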

    Read the article

  • Overwrite Content in PHP fwrite()

    - by Shahmir Javaid
    Is there any way you can overwrite a line in PHP? Let me be a little clearer using examples. My array:

        array {
            [DEVICE]    => eth0,
            [IPADDR]    => 192.168.0.2,
            [NETMASK]   => 255.255.255.0,
            [NETWORK]   => 192.168.0.0,
            [BROADCAST] => 255.255.255.255,
            [GATEWAY]   => 192.168.0.1,
            [ONBOOT]    => no
        }

    The file I'm overwriting:

        DEVICE=eth0
        IPADDR=192.168.200.2
        NETMASK=255.255.255.0
        NETWORK=192.168.200.0
        BROADCAST=255.255.255.255
        GATEWAY=192.168.200.1
        ONBOOT=no
        DNS1=195.100.10.1

    Result of the rewritten file:

        DEVICE=eth0
        IPADDR=192.168.0.2
        NETMASK=255.255.255.0
        NETWORK=192.168.0.0
        BROADCAST=255.255.255.255
        GATEWAY=192.168.0.1
        ONBOOT=no
        DNS1=195.100.10.1

    Note that DNS1=195.100.10.1 stays in the file because there is no key with the value DNS in our array. Thanks
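
    The question is about PHP, but the underlying algorithm is small; here is a sketch of it in Python for illustration (the filename and helper name are made up). A PHP version would read the file with file(), transform each line the same way, and fwrite() the result back.

        def overwrite_config(path, settings):
            """Replace KEY=VALUE lines whose KEY is in `settings`;
            leave unknown keys such as DNS1 untouched."""
            with open(path) as f:
                lines = f.readlines()

            out = []
            for line in lines:
                key, sep, _ = line.partition("=")
                if sep and key.strip() in settings:
                    out.append("%s=%s\n" % (key.strip(), settings[key.strip()]))
                else:
                    out.append(line)          # e.g. the DNS1 line survives as-is

            with open(path, "w") as f:
                f.writelines(out)

        overwrite_config("ifcfg-eth0", {
            "DEVICE": "eth0", "IPADDR": "192.168.0.2", "NETMASK": "255.255.255.0",
            "NETWORK": "192.168.0.0", "BROADCAST": "255.255.255.255",
            "GATEWAY": "192.168.0.1", "ONBOOT": "no",
        })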

    Read the article

  • How to set the size of a wx.aui.AuiManager Pane that is centered?

    - by aF
    Hello, I have three panes with the InfoPane center option. I want to know how to set their size. Using this code:

        import wx
        import wx.aui

        class MyFrame(wx.Frame):
            def __init__(self, parent, id=-1, title='wx.aui Test',
                         pos=wx.DefaultPosition, size=(800, 600),
                         style=wx.DEFAULT_FRAME_STYLE):
                wx.Frame.__init__(self, parent, id, title, pos, size, style)
                self._mgr = wx.aui.AuiManager(self)

                # create several text controls
                text1 = wx.TextCtrl(self, -1, 'Pane 1 - sample text',
                                    wx.DefaultPosition, wx.Size(200, 150),
                                    wx.NO_BORDER | wx.TE_MULTILINE)
                text2 = wx.TextCtrl(self, -1, 'Pane 2 - sample text',
                                    wx.DefaultPosition, wx.Size(200, 150),
                                    wx.NO_BORDER | wx.TE_MULTILINE)
                text3 = wx.TextCtrl(self, -1, 'Main content window',
                                    wx.DefaultPosition, wx.Size(200, 150),
                                    wx.NO_BORDER | wx.TE_MULTILINE)

                # add the panes to the manager
                self._mgr.AddPane(text1, wx.CENTER)
                self._mgr.AddPane(text2, wx.CENTER)
                self._mgr.AddPane(text3, wx.CENTER)

                # tell the manager to 'commit' all the changes just made
                self._mgr.Update()
                self.Bind(wx.EVT_CLOSE, self.OnClose)

            def OnClose(self, event):
                # deinitialize the frame manager
                self._mgr.UnInit()
                # delete the frame
                self.Destroy()

        app = wx.App()
        frame = MyFrame(None)
        frame.Show()
        app.MainLoop()

    I want to know what is called when we change the size of the panes. If you tell me that, I can do the rest by myself :)
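
    One way to control the sizes (a sketch against the wx.aui.AuiPaneInfo API; the pane name and sizes here are arbitrary choices) is to pass an AuiPaneInfo object instead of the bare wx.CENTER direction, since AuiPaneInfo carries the BestSize/MinSize hints the layout algorithm reads:

        # Replacing e.g. self._mgr.AddPane(text1, wx.CENTER) with:
        info = (wx.aui.AuiPaneInfo()
                .Name("pane1")                 # arbitrary identifier
                .CenterPane()                  # same docking as wx.CENTER
                .BestSize(wx.Size(400, 300))   # size the layout aims for
                .MinSize(wx.Size(100, 50)))    # hard lower bound
        self._mgr.AddPane(text1, info)
        self._mgr.Update()                     # commit, as in the original code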

    Read the article

  • giving a query as part of a uri in autobench

    - by Deepika
    I am using autobench for benchmarking. An example of an autobench command is shown below:

        autobench --single_host --host1 testhost.foo.com --uri1 /index.html --quiet
                  --timeout 5 --low_rate 20 --high_rate 200 --rate_step 20
                  --num_call 10 --num_conn 5000 --file bench.tsv

    The URI which I have to specify has a query attached to it. When I run the command with the query, I get the following result:

        dem_req_rate req_rate_localhost con_rate_localhost min_rep_rate_localhost avg_rep_rate_localhost max_rep_rate_localhost stddev_rep_rate_localhost resp_time_localhost net_io_localhost errors_localhost
         200   0   20   0  0  0  0  0  0  101
         400   0   40   0  0  0  0  0  0  101
         600   0   60   0  0  0  0  0  0  101
         800   0   80   0  0  0  0  0  0  101
        1000   0  100   0  0  0  0  0  0  101
        1200   0  120   0  0  0  0  0  0  101
        1400   0  140   0  0  0  0  0  0  101
        1600   0  160   0  0  0  0  0  0  101
        1800   0  180   0  0  0  0  0  0  101
        2000   0  200   0  0  0  0  0  0  101

    The request and reply rates are all zeroes. Can anybody please tell me how to give a query as part of the URI? Thank you in advance.
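
    One guess, since the failing URI isn't shown: autobench hands --uri1 to httperf verbatim, so the query part has to be shell-quoted and any reserved characters percent-encoded. A quick Python check of what the encoded form should look like (the path and parameters here are hypothetical):

        from urllib.parse import quote

        path = "/search.php"
        query = "term=200 success&page=1"   # hypothetical query string
        # Keep '=' and '&' as separators, encode everything else (spaces etc.).
        uri = "%s?%s" % (path, quote(query, safe="=&"))
        print(uri)   # -> /search.php?term=200%20success&page=1

    On the command line that would then be passed as --uri1 '/search.php?term=200%20success&page=1', with single quotes so the shell does not interpret the '&'.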

    Read the article

  • on click checkbox set input attr

    - by Tommy Arnold
    I have an HTML form with 4 columns. The first two columns are sizes inside input boxes with disabled='disabled'. When the user clicks a radio button to select a size, a checkbox appears; when they click that checkbox I would like to change the class and the disabled attribute of the inputs on that table row, to allow them to edit the input box.

        <table width="388" border="1" id="product1">
            <tr>
                <td width="100">Width</td>
                <td width="100">Height</td>
                <td width="48">Price</td>
                <td width="65">Select</td>
            </tr>
            <tr>
                <td><input type="text" disabled='disabled' value="200"/><span> CMS</span></td>
                <td><input disabled='disabled' type="text" value="500"/><span> CMS</span></td>
                <td>£50.00</td>
                <td><input type="radio" name="product1" value="size1" /> Customise<input type="checkbox" name="custom[size1]" class="custombox" value="1"/></td>
            </tr>
            <tr>
                <td>200</td><td>1000</td><td>£100.00</td>
                <td><input type="radio" name="product1" value="size2" /> Customise<input disabled='disabled' type="checkbox" name="custom[size2]" class="custombox" value="1"/></td>
            </tr>
            <tr>
                <td>200</td><td>1500</td><td>£150</td>
                <td><input type="radio" name="product1" value="size3" /> Customise<input type="checkbox" name="custom[size3]" class="custombox" value="1"/></td>
            </tr>
        </table>

        <table width="288" border="1" id="product2">
            <tr>
                <td width="72">Width</td>
                <td width="75">Height</td>
                <td width="48">Price</td>
                <td width="65">&nbsp;</td>
            </tr>
            <tr>
                <td>200</td><td>500</td><td>£50.00</td>
                <td><input type="radio" name="product2" value="size1" /> Customise<input type="checkbox" name="custom[size1]" class="custombox" value="1"/></td>
            </tr>
            <tr>
                <td>200</td><td>1000</td><td>£100.00</td>
                <td><input type="radio" name="product2" value="size2" /> Customise<input type="checkbox" name="custom[size2]" class="custombox" value="1"/></td>
            </tr>
            <tr>
                <td>200</td><td>1500</td><td>£150</td>
                <td><input type="radio" name="product2" value="size3" /> Customise<input type="checkbox" name="custom[size3]" class="custombox" value="1"/></td>
            </tr>
        </table>

    CSS:

        input[type=checkbox] { display: none; }
        input[type=checkbox].shown { display: inline; }
        input .edit { border: 1px solid red; }
        input[disabled='disabled'] { border: 0px; width: 60px; padding: 5px; float: left; background: #fff; }
        span { float: left; width: 30px; padding: 5px; }

    jQuery:

        $("body :checkbox").hide();

        // The most obvious way is to set radio-button click handlers
        // for each table separately:
        $("#product1 :radio").click(function() {
            $("#product1 :checkbox").hide();
            $("#product1 .cbox").hide();
            $(this).parent().children(":checkbox").show();
            $(this).parent().children(".cbox").show();
        });
        $("#product2 :radio").click(function() {
            $("#product2 :checkbox").hide();
            $("#product2 .cbox").hide();
            $(this).parent().children(":checkbox").show();
            $(this).parent().children(".cbox").show();
        });

    This is what I thought would work, but it's not working:

        $("#product1 :checkbox").click(function(){
            $(this).parent("tr").children("td :input").attr('disabled','');
            $(this).parent("tr").children("td :input").toggleClass(edit);
        });
        $("#product2 :checkbox").click(function(){
            $(this).parent("tr").children("td :input").attr('disabled','');
            $(this).parent("tr").children("td :input").toggleClass(edit);
        });

    Thanks in advance for any help.

    Read the article

  • Zend_Db_Table and Zend_Paginator num rows

    - by Uffo
    I have the following query:

        $this->select()
             ->where("`name` LIKE ?", '%' . mysql_escape_string($name) . '%')

    And the Zend_Paginator code:

        $paginator = new Zend_Paginator(
            // $d is an instance of Zend_Db_Select
            new Zend_Paginator_Adapter_DbSelect($d)
        );
        $paginator->getAdapter()->setRowCount(200);
        $paginator->setItemCountPerPage(15)
                  ->setPageRange(10)
                  ->setCurrentPageNumber($pag);
        $this->view->data = $paginator;

    As you see, I'm passing the data to the view using $this->view->data = $paginator. Before I added $paginator->getAdapter()->setRowCount(200); I could determine whether I had any data or not; what I mean by data is query results: if the query has some results I show them to the user, if not, I need to show a message ("No results!"). But at the moment I don't know how to determine this, since count($paginator) doesn't work anymore because of $paginator->getAdapter()->setRowCount(200); and I'm using that call because otherwise it takes about 7 seconds for Zend_Paginator to count the page numbers. So how can I find out if my query has any results?

    Read the article

  • nodejs response speed and nginx

    - by user1502440
    I've just started testing nodejs, and I'd like some help understanding the following behavior.

    Example #1:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.end('foo');
        }).listen(1001, '0.0.0.0');

    Example #2:

        var http = require('http');
        http.createServer(function(req, res){
            res.writeHeader(200, {'Content-Type': 'text/plain'});
            res.write('foo');
            res.end('bar');
        }).listen(1001, '0.0.0.0');

    When testing response time in Chrome:

        example #1 - 6-10 ms
        example #2 - 200-220 ms

    But if I test both examples through an nginx proxy_pass:

        server {
            listen 1011;
            location / {
                proxy_pass http://127.0.0.1:1001;
            }
        }

    I get this:

        example #1 - 4-8 ms
        example #2 - 4-8 ms

    I am not an expert on either nodejs or nginx; can someone explain this? nodejs v0.8.1, nginx v1.2.2.
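
    A plausible explanation, offered as a guess from the symptom rather than something verified against this setup: example #2 sends the body as two small TCP segments (chunked 'foo', then 'bar'), and Nagle's algorithm interacting with delayed ACKs can stall the second segment by a timer of roughly this magnitude, while nginx buffers the upstream response into a single write and so masks the effect. In Node the knob is the response socket's setNoDelay(true); the same socket option, sketched in Python:

        import socket

        sock = socket.create_connection(("127.0.0.1", 1001))
        # Disable Nagle: small writes go out immediately instead of waiting
        # for the ACK of the previously sent segment.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)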

    Read the article

  • Does the old vector get cleared? If yes, how and when?

    - by user180866
    I have the following code:

        void foo() {
            vector<double> v(100, 1);     // line 1
            // some code
            v = vector<double>(200, 2);   // line 2
            // some code
        }

    What happens to the vector of size 100 after the second line? Does it get cleared by itself? If the answer is yes, how and when is it cleared? By the way, is there any other "easy and clean" way to change the vector as in line 2? I don't want things like:

        v.resize(200);
        for (int i = 0; i < 200; i++) v[i] = 2;

    Read the article

  • sql server bulk copy out/postgres copy from infile

    - by Chris Curvey
    I'm starting a conversion of a system from MS SQL Server to Postgres. I have the table structures converted, and I use "bcp" to get the data out of SQL Server. When I COPY the data into Postgres I get:

        ERROR:  invalid byte sequence for encoding "UTF8": 0x80
        HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
        CONTEXT:  COPY cm_outgoing, line 200: "200  c:\temp\200.xml  2009-10-10 01:50:44.000  1900-01-01 00:00:00.000"

    I've already used "sed" to get rid of the NUL (0x00) entries in the file, and I can't find any instances of 0x80 in the file that I'm trying to import. Any thoughts? Is there an easier way?
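
    A guess at the cause: bcp exports character data in the client's Windows code page (often cp1252, where 0x80 is the euro sign), and line-oriented tools like sed and grep can easily miss a single high byte. A sketch that locates the offending bytes and re-encodes the dump to UTF-8 (the filenames and the cp1252 assumption are illustrative):

        # Hypothetical filenames; adjust to the actual bcp output.
        src, dst = "cm_outgoing.dat", "cm_outgoing.utf8.dat"

        with open(src, "rb") as f:
            data = f.read()

        # Show exactly where the 0x80 bytes sit, with surrounding context.
        for i, byte in enumerate(data):
            if byte == 0x80:
                print("0x80 at offset %d: %r" % (i, data[max(0, i - 20):i + 20]))

        # Re-encode for COPY; this raises UnicodeDecodeError if cp1252 is the
        # wrong guess for the source encoding.
        with open(dst, "wb") as f:
            f.write(data.decode("cp1252").encode("utf-8"))

    Alternatively, Postgres can do the conversion itself: issuing SET client_encoding TO 'WIN1252'; before the COPY tells the server how to interpret the incoming bytes.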

    Read the article

  • Sorting Multidimensional Array with Javascript: Integers

    - by tkm256
    I have a 2D array called "results". Each "row" array in results contains both string and integer values. I'm using this script to sort the array by any "column" on an onclick event:

        function sort_array(results, column, direction) {
            var sorted_results = results.sort(value);

            function value(a, b) {
                a = a[column];
                b = b[column];
                return a == b ? 0 : (a < b ? -1 * direction : 1 * direction);
            }
        }

    This works fine for the columns with strings, but it treats the columns of integers like strings instead of numbers. For example, the values 15, 1000, 200, 97 would be sorted 1000, 15, 200, 97 ascending, or 97, 200, 15, 1000 descending. I've double-checked the typeof the integer values, and the script knows they're numbers. How can I get it to treat them as such?
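
    The described order is exactly lexicographic comparison; a quick illustration of the difference in Python (the fix in the JavaScript comparator would be coercing both sides, e.g. a = Number(a[column])):

        values = ["15", "1000", "200", "97"]
        print(sorted(values))            # ['1000', '15', '200', '97']  string order
        print(sorted(values, key=int))   # ['15', '97', '200', '1000']  numeric order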

    Read the article

  • Storing user settings in table - how?

    - by Mdillion
    I have about 200 settings for each user; these include notification settings and tracking settings for user activities on objects. The problem is how to store them in the DB. Should each setting be a row or a column? If columns, the table will have 200 columns. If rows, only about 3 columns, but 200 rows per user × even 10 million users = 2 billion rows, which is not good. So how else can I store all these settings? NOTE: these settings are a mix of text entries and FK lookups to other tables. Thanks.
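
    One common middle ground, sketched here with sqlite3 purely for illustration: a narrow row-per-setting table that stores only values that differ from a per-setting default, so most users contribute far fewer than 200 rows (an FK-lookup setting can store the referenced id in the value column):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE setting_defs (
                setting_id  INTEGER PRIMARY KEY,
                name        TEXT NOT NULL,
                default_val TEXT NOT NULL
            );
            -- Only overrides are stored; a missing row means "use the default".
            CREATE TABLE user_settings (
                user_id    INTEGER NOT NULL,
                setting_id INTEGER NOT NULL REFERENCES setting_defs(setting_id),
                value      TEXT NOT NULL,
                PRIMARY KEY (user_id, setting_id)
            );
        """)

        def get_setting(user_id, setting_id):
            row = conn.execute("""
                SELECT COALESCE(us.value, sd.default_val)
                FROM setting_defs sd
                LEFT JOIN user_settings us
                       ON us.setting_id = sd.setting_id AND us.user_id = ?
                WHERE sd.setting_id = ?""", (user_id, setting_id)).fetchone()
            return row[0] if row else None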

    Read the article

  • How to increase thread-pool threads on IIS 7.0

    - by Xaqron
    Environment: Windows Server 2008 Enterprise, IIS 7.0, ASP.NET 2.0 (CLR), .NET 4.0

    I have an ASP.NET application with no pages and no session (an HttpHandler); it is a streaming server. I use two threads to process each request, so if there are 100 connected clients, then 200 threads are used. This is a dedicated server and no other application runs on it. The problem is that after 200 clients are connected (under stress testing) the application refuses new clients, but if I increase the worker processes of the application pool (create a web garden) then I can have 200 new happy clients per w3wp process. I suspect the .NET thread-pool limit is reached at that point and I need to increase it. Thanks

    Read the article

  • Historical Rolling Daily sum

    - by user2980057
    I have a table consisting of dates and the amount of revenue recorded for each day, going back about 12 years. What I would like to do with this data is create a new table with dates and prior 7-day revenue numbers. Any guidance would be greatly appreciated. Below is an example of my source table and what my results would need to look like.

    Source table:

        DATE       | Revenue
        12/31/2013 | 200
        12/30/2013 | 300
        12/29/2013 | 400
        12/28/2013 | 100
        12/27/2013 | 200
        12/26/2013 | 150
        12/25/2013 | 350
        12/24/2013 | 450
        12/23/2013 | 200
        12/22/2013 | 300
        12/21/2013 | 100
        12/20/2013 | 300

    Resulting table:

        DATE       | 7Dayrev
        12/31/2013 | 1700
        12/30/2013 | 1950
        12/29/2013 | 1850
        12/28/2013 | 1750
        12/27/2013 | 1750
        12/26/2013 | 1850
        etc.
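
    The question doesn't say which database this is, so as a neutral sketch, here is the same rolling total computed with pandas using the sample figures above (on SQL Server 2012+, for instance, the equivalent would be SUM(revenue) OVER (ORDER BY date ROWS 6 PRECEDING)):

        import pandas as pd

        df = pd.DataFrame({
            "date": pd.date_range("2013-12-20", "2013-12-31"),
            "revenue": [300, 100, 300, 200, 450, 350, 150,
                        200, 100, 400, 300, 200],
        }).sort_values("date")

        # Current day plus the six days before it; min_periods=7 leaves the
        # first six days blank rather than reporting a partial total.
        df["seven_day_rev"] = df["revenue"].rolling(window=7, min_periods=7).sum()
        print(df.tail(6))   # 12/26..12/31 -> 1850, 1750, 1750, 1850, 1950, 1700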

    Read the article

  • SOA & Application Grid Specialization – 6 steps to success – part 1 OMM

    - by Jürgen Kress
    SOA Specialization – Oracle Open Market Model (OMM)

    Dear Application Grid SOA Partners,

    Our goal is to get you SOA Specialized; over the next weeks we will explain, in a series of posts, how you can achieve SOA Specialization. Specialization is key to being recognized by Oracle and to being preferred by our customers. The first step to becoming SOA Specialized is to prove 2 transactions. You can either resell, co-sell or refer – as a proof point we use our Open Market Model (OMM). To create your account, go to our new Partner Portal:

        1. Go to the login of your OPN homepage: http://oraclepartnernetwork.oracle.com
        2. Click on: "Sales" > "Create a PRM User Account"
        3. Enter your User ID
        4. Enter your Company Identifier (please ask your OPN IC)
        5. Finish
        6. Wait for a confirmation email

    If you need OMM support, please contact our dedicated team:

        Nordics: [email protected]
        Portugal, Spain: [email protected]
        Austria, Belgium, Germany, Luxembourg, Netherlands, Switzerland, United Arab Emirates, United Kingdom: [email protected]

    For more information about OMM watch our on-demand webcast "Recognising the Value of Partners: Register Oracle Deals through the Open Market Model (OMM)". Become SOA Specialized today.

    SOA Specialized & Application Grid Specialized: create your references, create your OMM entry, take the SOA Sales assessment, take the SOA Pre-Sales assessment, take the Support assessment and register for the SOA Implementation assessment. For more information on Specialization please visit our OPN Specialized Webcast Series. To get support on Specialization please contact the Partner Business Centers.

        SOA Specialized                 | Application Grid Specialized
        Prove 2 transactions with OMM   | Prove 2 transactions with OMM
        Create your 2 references        | Create your 2 references
        SOA Sales assessment            | Oracle Application Grid Sales Specialist
        SOA Pre-Sales assessment        | Oracle Application Grid PreSales Specialist
        Support assessment              | Support assessment
        SOA Implementation assessment   | Application Grid Implementation assessment

    Read the article

  • game speed problem

    - by Meko
    Hi. I made a little game, but it runs at a different speed on every computer. I think it is about resolution: I draw everything in paintComponent, and if I change the screen size the game goes slower or faster. If I run the game on another computer which has a different resolution, it also behaves differently. This is my game: http://rapidshare.com/files/364597095/ShooterGame.2.6.0.jar and here is the code (the comparison operators in the loop conditions below were stripped by the formatting and have been restored where the intent was clear):

        public class Shooter extends JFrame implements KeyListener, Runnable {
            JFrame frame = new JFrame();
            String player;
            Font startFont, startSubFont, timerFont, healthFont;
            Image img;
            Image backGround;
            Graphics dbi;
            URL url1 = this.getClass().getResource("Images/p2.gif");
            URL url2 = this.getClass().getResource("Images/p3.gif");
            URL url3 = this.getClass().getResource("Images/p1.gif");
            URL url4 = this.getClass().getResource("Images/p4.gif");
            URL urlMap = this.getClass().getResource("Images/zemin.jpg");
            Player p1 = new Player(5, 150, 10, 40, Color.GREEN, url3);
            Computer p2 = new Computer(750, 150, 10, 40, Color.BLUE, url1);
            Computer p3 = new Computer(0, 0, 10, 40, Color.BLUE, url2);
            Computer p4 = new Computer(0, 0, 10, 40, Color.BLUE, url4);
            ArrayList<Bullets> b = new ArrayList<Bullets>();
            ArrayList<CBullets> cb = new ArrayList<CBullets>();
            Thread sheap;
            boolean a, d, w, s;
            boolean toUp, toDown;
            boolean GameOver;
            boolean Level2;
            boolean newGame, resart, pause;
            int S, E;
            int random;
            int cbSpeed = 0;
            long timeStart, timeEnd;
            int timeElapsed;
            long GameStart, GameEnd;
            int GameScore;
            int Timer = 0;
            int timerStart, timerEnd;

            public Shooter() {
                sheap = new Thread(this);
                sheap.start();
                startFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 32);
                startSubFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 25);
                timerFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 16);
                healthFont = new Font("Tiresias PCFont Z", Font.BOLD + Font.ITALIC, 16);
                setTitle("Shooter 2.5.1");
                setBounds(350, 250, 800, 600);
                // setResizable(false);
                setBackground(Color.black);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                addKeyListener(this);
                a = d = w = s = false;
                toUp = toDown = true;
                GameOver = true;
                newGame = true;
                Level2 = false;
                S = E = 0;
                setVisible(true);
            }

            public void paint(Graphics g) {
                img = createImage(getWidth(), getHeight());
                dbi = img.getGraphics();
                paintComponent(dbi);
                g.drawImage(img, 0, 0, this);
            }

            public void paintComponent(Graphics g) {
                repaint();
                timeStart = System.currentTimeMillis();
                backGround = Toolkit.getDefaultToolkit().getImage(urlMap);
                g.drawImage(backGround, 0, 0, null);
                g.setColor(Color.red);
                g.setFont(healthFont);
                g.drawString("" + player + " Health : " + p1.health, 30, 50);
                g.setColor(Color.red);
                g.drawString("Computer Health : " + (p2.health + p3.health + p4.health), 600, 50);
                g.setColor(Color.BLUE);
                g.setFont(timerFont);
                g.drawString("Time : " + Timer, 330, 50);
                if (newGame) {
                    g.setColor(Color.LIGHT_GRAY);
                    g.setFont(startFont);
                    g.drawString("Well Come To Shoot Game", 200, 190);
                    g.drawString("Press ENTER To Start", 250, 220);
                    g.setColor(Color.LIGHT_GRAY);
                    g.setFont(startSubFont);
                    g.drawString("Use W,A,S,D and Space For Fire", 200, 250);
                    g.drawString("GOOD LUCK", 250, 280);
                    newGame();
                }
                if (!GameOver) {
                    for (Bullets b1 : b) { b1.draw(g); }
                    for (CBullets b2 : cb) { b2.draw(g); }
                    update(); // here: movements for player and for fire
                }
                if (p1.health <= 0) {
                    g.setColor(p2.col);
                    g.setFont(startFont);
                    g.drawString("Computer Wins ", 200, 190);
                    g.drawString("Press Key R to Restart ", 200, 220);
                    GameOver = true;
                } else if (p2.health <= 0 && p3.health <= 0 && p4.health <= 0) {
                    g.setColor(p1.col);
                    g.setFont(startFont);
                    g.drawString("" + player + " Wins ", 200, 190);
                    g.drawString("Press Key R to Resart ", 200, 220);
                    GameOver = true;
                    g.setColor(Color.MAGENTA);
                    g.drawString("" + player + "`s Score is " + Timer, 200, 120);
                }
                if (Level2) {
                    if (p3.health >= 0) {
                        p3.draw(g);
                        for (CBullets b3 : cb) { b3.draw(g); }
                    } else { p3.x = 1000; }
                    if (p4.health >= 0) {
                        p4.draw(g);
                        for (CBullets b4 : cb) { b4.draw(g); }
                    } else { p4.x = 1000; }
                }
                if (p1.health >= 0) { p1.draw(g); }
                if (p2.health >= 0) { p2.draw(g); } else { p2.x = 1000; }
            }

            public void update() {
                if (w && p1.y > 54) { p1.moveUp(); }
                if (s && p1.y < 547) { p1.moveDown(); }
                if (a && p1.x > 0) { p1.moveLeft(); }
                if (d && p1.x < 200) { p1.moveRight(); }
                random = 1 * (int) (Math.random() * 100);
                if (random > 96) {
                    if (p2.health >= 0) {
                        CBullets bo = p2.getCBull();
                        bo.xVel = -1 - cbSpeed;
                        cb.add(bo);
                    }
                    if (Level2) {
                        if (p3.health >= 0) {
                            CBullets bo1 = p3.getCBull();
                            bo1.xVel = -2 - cbSpeed;
                            cb.add(bo1);
                        }
                        if (p4.health >= 0) {
                            CBullets bo2 = p4.getCBull();
                            bo2.xVel = -4 - cbSpeed;
                            cb.add(bo2);
                        }
                    }
                }
                if (S == 1) {
                    if (p1.health >= 0) {
                        Bullets bu = p1.getBull();
                        bu.xVel = 5;
                        b.add(bu);
                        S += 1;
                    }
                }
                // Here also a problem: when the computer or the player has more
                // fire, it gives an array exception.
                for (int i = cb.size() - 1; i >= 0; i--) {
                    boolean bremoved = false;
                    for (int j = b.size() - 1; j >= 0; j--) {
                        if (b.get(j).rect.intersects(cb.get(i).rect) || cb.get(i).rect.intersects(b.get(j).rect)) {
                            bremoved = true;
                            b.remove(j);
                        }
                    }
                    if (bremoved) cb.remove(i);
                }
                for (int i = 0; i < b.size(); i++) {
                    b.get(i).move();
                    if (b.get(i).rect.intersects(p2.rect)) {
                        if (p2.health >= 0) {
                            p2.health--;
                            b.remove(i); // System.out.println("Hited P2");
                            i--;
                            continue;
                        }
                    }
                    if (b.get(i).rect.intersects(p3.rect)) {
                        if (p3.health >= 0) {
                            p3.health--;
                            b.remove(i); // System.out.println("Hited P3");
                            i--;
                            continue;
                        }
                    }
                    if (b.get(i).rect.intersects(p4.rect)) {
                        if (p4.health >= 0) {
                            p4.health--;
                            b.remove(i); // System.out.println("Hited P4");
                            i--;
                            continue;
                        }
                    }
                    if (b.get(i).rect.x > 790) { b.remove(i); }
                }
                for (int j = 0; j < cb.size(); j++) {
                    cb.get(j).move();
                    if (cb.get(j).rect.intersects(p1.rect) && cb.get(j).xVel < 0) {
                        p1.health--;
                        cb.remove(j);
                        j--;
                        continue;
                    }
                }
                timeEnd = System.currentTimeMillis();
                timeElapsed = (int) (timeEnd - timeStart);
            }

            public void level2() {
                if (p2.health <= 10) {
                    Level2 = true;
                    cbSpeed = 4;
                    p3.x = 750;
                    p4.x = 750;
                    p2.speed = 10;
                    p3.speed = 20;
                    p4.speed = 30;
                }
            }

            public void keyTyped(KeyEvent e) { }

            public void keyPressed(KeyEvent e) {
                switch (e.getKeyCode()) {
                    case KeyEvent.VK_ENTER: newGame = false; break;
                    case KeyEvent.VK_P: pause = true; break;
                    case KeyEvent.VK_R: resart = true; break;
                    case KeyEvent.VK_A: a = true; break;
                    case KeyEvent.VK_D: d = true; break;
                    case KeyEvent.VK_W: w = true; break;
                    case KeyEvent.VK_S: s = true; break;
                    case KeyEvent.VK_SPACE: S += 1; break;
                }
            }

            public void keyReleased(KeyEvent e) {
                switch (e.getKeyCode()) {
                    case KeyEvent.VK_A: a = false; break;
                    case KeyEvent.VK_D: d = false; break;
                    case KeyEvent.VK_W: w = false; break;
                    case KeyEvent.VK_S: s = false; break;
                    case KeyEvent.VK_SPACE: S = 0; break;
                }
            }

            public void newGame() {
                p1.health = 20; p2.health = 20; p3.health = 20; p4.health = 20;
                p3.x = 0; p4.x = 0; p2.x = 750;
                Level2 = false;
                cbSpeed = 0;
                p2.speed = 9;
                b.removeAll(b);
                cb.removeAll(cb);
                timerStart = (int) System.currentTimeMillis();
                GameOver = false;
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() { KeyListener k = new Shooter(); }
                });
            }

            @Override
            public void run() {
                player = JOptionPane.showInputDialog(frame, "Enter Player Name", "New Player", JOptionPane.DEFAULT_OPTION);
                while (true) {
                    timerEnd = (int) System.currentTimeMillis();
                    if (resart) { newGame(); resart = false; }
                    if (pause) { Thread.currentThread().notify(); }
                    try {
                        if (!GameOver) {
                            Timer = timerEnd - timerStart;
                            level2();
                            if (p1.y < p2.y && p2.y > 60) { p2.moveUp(); }
                            if (p1.y < p3.y && p3.y > 43) { p3.moveUp(); }
                            if (p1.y < p4.y && p4.y > 43) { p4.moveUp(); }
                            if (p1.y > p2.y && p2.y < 535) { p2.moveDown(); }
                            if (p1.y > p3.y && p3.y < 535) { p3.moveDown(); }
                            if (p1.y > p4.y && p4.y < 530) { p4.moveDown(); }
                        }
                        if (timeElapsed < 125) { Thread.currentThread().sleep(125); }
                    } catch (InterruptedException ex) {
                        System.out.print("FInished");
                    }
                }
            }
        }
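
    The usual cure for machine-dependent game speed, sketched as a general technique in Python rather than as a patch to the Swing code above: scale every movement by the measured frame time (speed in pixels per second), so a machine that renders fewer frames simply takes bigger steps per frame:

        import time

        x = 0.0
        SPEED = 120.0                  # pixels per second, an arbitrary choice

        last = time.perf_counter()
        for _ in range(10):            # stand-in for the real render loop
            now = time.perf_counter()
            dt = now - last            # seconds since the previous frame
            last = now
            x += SPEED * dt            # same on-screen speed on every machine
            time.sleep(0.016)          # pretend this frame took ~16 ms

        print("x after roughly 0.15 s of frames: %.1f" % x)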

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were:

    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:

    - Parallel gets better performance than series.
    - No significant performance improvement between parallel with 4 threads and 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets the best performance.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Fixing predicated NSFetchedResultsController/NSFetchRequest performance with SQLite backend?

    - by Jaanus
    I have a series of NSFetchedResultsControllers powering some table views, and their performance on device was abysmal, on the order of seconds. Since it all runs on the main thread, it's blocking my app at startup, which is not great. I investigated, and it turns out the predicate is the problem:

        NSPredicate *somePredicate = [NSPredicate predicateWithFormat:@"ANY somethings == %@", something];
        [fetchRequest setPredicate:somePredicate];

    I.e. the fetch entity, call it "things", has a many-to-many relation with the entity "something". This predicate is a filter that limits the results to only things that have a relation with a particular "something". When I removed the predicate for testing, fetch time (the initial performFetch: call) dropped (for some extreme cases) from 4 seconds to around 100 ms or less, which is acceptable. I am troubled by this, though, as it negates a lot of the benefit I was hoping to gain with Core Data and NSFRC, which otherwise seems like a powerful tool. So, my question is: how can I optimize this performance? Am I using the predicate wrong? Should I modify the model/schema somehow? What other ways are there to fix this? Is this kind of degraded performance to be expected? (There are on the order of hundreds of <1KB objects.)

    EDIT WITH DETAILS: Here's the code:

        [fetchRequest setFetchLimit:200];
        NSLog(@"before fetch");
        BOOL success = [frc performFetch:&error];
        if (!success) {
            NSLog(@"Fetch request error: %@", error);
        }
        NSLog(@"after fetch");

    Updated logs (previously, some application inefficiencies were degrading the performance here; these are the updated logs, which should be as close to optimal as you can get in my current environment):

        2010-02-05 12:45:22.138 Special Ppl[429:207] before fetch
        2010-02-05 12:45:22.144 Special Ppl[429:207] CoreData: sql: SELECT DISTINCT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 LEFT OUTER JOIN Z_1THINGS t1 ON t0.Z_PK = t1.Z_2THINGS WHERE t1.Z_1SOMETHINGS = ? ORDER BY t0.ZID DESC LIMIT 200
        2010-02-05 12:45:22.663 Special Ppl[429:207] CoreData: annotation: sql connection fetch time: 0.5094s
        2010-02-05 12:45:22.668 Special Ppl[429:207] CoreData: annotation: total fetch execution time: 0.5240s for 198 rows.
        2010-02-05 12:45:22.706 Special Ppl[429:207] after fetch

    If I do the same fetch without the predicate (by commenting out the two lines at the beginning of the question):

        2010-02-05 12:44:10.398 Special Ppl[414:207] before fetch
        2010-02-05 12:44:10.405 Special Ppl[414:207] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 ORDER BY t0.ZID DESC LIMIT 200
        2010-02-05 12:44:10.426 Special Ppl[414:207] CoreData: annotation: sql connection fetch time: 0.0125s
        2010-02-05 12:44:10.431 Special Ppl[414:207] CoreData: annotation: total fetch execution time: 0.0262s for 200 rows.
        2010-02-05 12:44:10.457 Special Ppl[414:207] after fetch

    A 20-fold difference in times. 500 ms is not that great, and there does not seem to be a way to do it in a background thread or otherwise optimize it that I can think of. (Apart from going to a binary store, where this becomes a non-issue, so I might do that. Binary store performance is consistently ~100 ms for the above 200-object predicated query.) (I nested another question here previously, which I have now moved away.)

    Read the article

  • Drawing a WPF UserControl with DataBinding to an Image

    - by LorenVS
    Hey Everyone, So I'm trying to use a WPF User Control to generate a ton of images from a dataset where each item in the dataset would produce an image... I'm hoping I can set it up in such a way that I can use WPF databinding, and for each item in the dataset, create an instance of my user control, set the dependency property that corresponds to my data item, and then draw the user control to an image, but I'm having problems getting it all working (not sure whether databinding or drawing to the image is my problem) Sorry for the massive code dump, but I've been trying to get this working for a couple of hours now, and WPF just doesn't like me (have to learn at some point though...) My User Control looks like this: <UserControl x:Class="Bleargh.ImageTemplate" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:c="clr-namespace:Bleargh" x:Name="ImageTemplateContainer" Height="300" Width="300"> <Canvas> <TextBlock Canvas.Left="50" Canvas.Top="50" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.Customer,ElementName=ImageTemplateContainer}" /> <TextBlock Canvas.Left="50" Canvas.Top="100" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.Location,ElementName=ImageTemplateContainer}" /> <TextBlock Canvas.Left="50" Canvas.Top="150" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.ItemNumber,ElementName=ImageTemplateContainer}" /> <TextBlock Canvas.Left="50" Canvas.Top="200" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.Description,ElementName=ImageTemplateContainer}" /> </Canvas> </UserControl> And I've added a dependency property of type "Booking" to my user control that I'm hoping will be the source for the databound values: public partial class ImageTemplate : UserControl { public static readonly DependencyProperty BookingProperty = DependencyProperty.Register("Booking", typeof(Booking), typeof(ImageTemplate)); public Booking Booking { get { return (Booking)GetValue(BookingProperty); } set { SetValue(BookingProperty, value); } } public ImageTemplate() { InitializeComponent(); } } And I'm using the following code to render the control: List<Booking> bookings = Booking.GetSome(); for(int i = 0; i < bookings.Count; i++) { ImageTemplate template = new ImageTemplate(); template.Booking = bookings[i]; RenderTargetBitmap bitmap = new RenderTargetBitmap( (int)template.Width, (int)template.Height, 120.0, 120.0, PixelFormats.Pbgra32); bitmap.Render(template); BitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(bitmap)); using (Stream s = File.OpenWrite(@"C:\Code\Bleargh\RawImages\" + i.ToString() + ".png")) { encoder.Save(s); } } EDIT: I should add that the process works without any errors whatsoever, but I end up with a directory full of plain-white images, not text or anything... And I have confirmed using the debugger that my Booking objects are being filled with the proper data... EDIT 2: Did something I should have done a long time ago, set a background on my canvas, but that didn't change the output image at all, so my problem is most definitely somehow to do with my drawing code (although there may be something wrong with my databinding too)
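    Not a definitive answer, but the all-white output is the classic symptom of rendering a control that has never been through a layout pass: created purely in code, its ActualWidth/ActualHeight are still 0 when RenderTargetBitmap captures it. A hedged sketch of the usual fix, forcing Measure/Arrange (which also gives the ElementName bindings a layout tick to resolve) and rendering at 96 DPI so the 300x300 content isn't scaled by the 120/96 DPI factor and clipped:

    ImageTemplate template = new ImageTemplate();
    template.Booking = bookings[i];

    // Force a layout pass at the control's declared 300x300 size; without this,
    // the control has zero size and the captured image stays blank.
    template.Measure(new Size(300, 300));
    template.Arrange(new Rect(new Size(300, 300)));
    template.UpdateLayout(); // flush pending layout and binding work

    // 96 DPI keeps one device-independent unit equal to one pixel, so the
    // 300x300 bitmap captures the whole control without scaling.
    var bitmap = new RenderTargetBitmap(300, 300, 96.0, 96.0, PixelFormats.Pbgra32);
    bitmap.Render(template);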

    Read the article

  • Google Webmaster Tools reports fake 404 errors

    - by Edgar Quintero
    I have a website where Google Webmaster Tools reports 15,000 links as 404 errors. However, all of these links return a 200 when I visit them. The problem is that even though I can visit these pages and get a 200, those 15,000 pages won't index in Google; they aren't appearing in search results. Google Webmaster Tools keeps reporting these errors constantly, and I'm not sure what the problem is. We've considered a DNS issue, but it shouldn't be one: if it were, no page would be indexed, and I have 10,000 pages perfectly indexed. As for URL parameters, the affected pages don't share any similarity in their URL parameters that would make the cause obvious to me.
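    One way to narrow this down, offered as a hedged sketch rather than a diagnosis: request a reported URL the way Googlebot would (its user-agent, no cookies, redirects not followed), since a page can return 200 to a logged-in browser yet a 404, redirect chain, or timeout to the crawler. The URL below is an illustrative stand-in for one of the 15,000 reported links:

    using System;
    using System.Net;

    class GooglebotCheck
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/some-reported-page");
            request.UserAgent = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";
            request.AllowAutoRedirect = false; // surface redirect chains instead of silently following them
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine((int)response.StatusCode);
            }
            catch (WebException ex)
            {
                // .NET raises WebException for 4xx/5xx; this prints the status the crawler actually saw.
                var error = ex.Response as HttpWebResponse;
                Console.WriteLine(error != null ? ((int)error.StatusCode).ToString() : ex.Status.ToString());
            }
        }
    }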

    Read the article

  • Ingame menu is not working correctly

    - by Johnny
    The ingame menu opens when the player presses Escape during the main game. If the player presses Y in the ingame menu, the game switches to the main menu. Up to here, everything works. But if the player presses N in the ingame menu, the game should switch back to (i.e. resume) the main game. That doesn't work: the game just stays in the ingame menu when the player presses N. I set a breakpoint on this line of the Ingamemenu class: KeyboardState kbState = Keyboard.GetState(); CurrentState/currentGameState and LastState/lastGameState have the same state: IngamemenuState. But LastState/lastGameState should not have the same state as CurrentState/currentGameState. What is wrong? Why is the ingame menu not working correctly? public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; IState lastState, currentState; public enum GameStates { IntroState = 0, MenuState = 1, MaingameState = 2, IngamemenuState = 3 } public void ChangeGameState(GameStates newState) { lastGameState = currentGameState; lastState = currentState; switch (newState) { case GameStates.IntroState: currentState = new Intro(this); currentGameState = GameStates.IntroState; break; case GameStates.MenuState: currentState = new Menu(this); currentGameState = GameStates.MenuState; break; case GameStates.MaingameState: currentState = new Maingame(this); currentGameState = GameStates.MaingameState; break; case GameStates.IngamemenuState: currentState = new Ingamemenu(this); currentGameState = GameStates.IngamemenuState; break; } currentState.Load(Content); } public void ChangeCurrentToLastGameState() { currentGameState = lastGameState; currentState = lastState; } public GameStates CurrentState { get { return currentGameState; } set { currentGameState = value; } } public GameStates LastState { get { return lastGameState; } set { lastGameState = value; } } private GameStates currentGameState = GameStates.IntroState; private GameStates lastGameState; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { ChangeGameState(GameStates.IntroState); base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); currentState.Load(Content); } protected override void Update(GameTime gameTime) { currentState.Update(gameTime); if ((lastGameState == GameStates.MaingameState) && (currentGameState == GameStates.IngamemenuState)) { lastState.Update(gameTime); } base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); if ((lastGameState == GameStates.MaingameState) && (currentGameState == GameStates.IngamemenuState)) { lastState.Render(spriteBatch); } currentState.Render(spriteBatch); spriteBatch.End(); base.Draw(gameTime); } } public interface IState { void Load(ContentManager content); void Update(GameTime gametime); void Render(SpriteBatch batch); } public class Intro : IState { Texture2D Titelbildschirm; private Game1 game1; public Intro(Game1 game) { game1 = game; } public void Load(ContentManager content) { Titelbildschirm = content.Load<Texture2D>("gruft"); } public void Update(GameTime gametime) { KeyboardState kbState = Keyboard.GetState(); if (kbState.IsKeyDown(Keys.Space)) game1.ChangeGameState(Game1.GameStates.MenuState); } public void Render(SpriteBatch batch) { batch.Draw(Titelbildschirm, new Rectangle(0, 0, 1280, 720), Color.White); } } public
class Menu:IState { Texture2D Choosescreen; private Game1 game1; public Menu(Game1 game) { game1 = game; } public void Load(ContentManager content) { Choosescreen = content.Load<Texture2D>("menubild"); } public void Update(GameTime gametime) { KeyboardState kbState = Keyboard.GetState(); if (kbState.IsKeyDown(Keys.Enter)) game1.ChangeGameState(Game1.GameStates.MaingameState); if (kbState.IsKeyDown(Keys.Escape)) game1.Exit(); } public void Render(SpriteBatch batch) { batch.Draw(Choosescreen, new Rectangle(0, 0, 1280, 720), Color.White); } } public class Maingame : IState { Texture2D Spielbildschirm, axe; Vector2 position = new Vector2(100,100); private Game1 game1; public Maingame(Game1 game) { game1 = game; } public void Load(ContentManager content) { Spielbildschirm = content.Load<Texture2D>("hauszombie"); axe = content.Load<Texture2D>("axxx"); } public void Update(GameTime gametime) { KeyboardState keyboardState = Keyboard.GetState(); float delta = (float)gametime.ElapsedGameTime.TotalSeconds; position.X += 5 * delta; position.Y += 3 * delta; if (keyboardState.IsKeyDown(Keys.Escape)) game1.ChangeGameState(Game1.GameStates.IngamemenuState); } public void Render(SpriteBatch batch) { batch.Draw(Spielbildschirm, new Rectangle(0, 0, 1280, 720), Color.White); batch.Draw(axe, position, Color.White); } } public class Ingamemenu : IState { Texture2D Quitscreen; private Game1 game1; public Ingamemenu(Game1 game) { game1 = game; } public void Load(ContentManager content) { Quitscreen = content.Load<Texture2D>("quit"); } public void Update(GameTime gametime) { KeyboardState kbState = Keyboard.GetState(); if (kbState.IsKeyDown(Keys.Y)) game1.ChangeGameState(Game1.GameStates.MenuState); if (kbState.IsKeyDown(Keys.N)) game1.ChangeCurrentToLastGameState(); } public void Render(SpriteBatch batch) { batch.Draw(Quitscreen, new Rectangle(200, 200, 200, 200), Color.White); } }
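    One likely culprit, offered as a hedged guess: while the ingame menu is open, Game1.Update also runs lastState (the Maingame), which still sees the held Escape key and calls ChangeGameState(IngamemenuState) again, overwriting lastState with an Ingamemenu; that is exactly why the breakpoint shows both states as IngamemenuState. Checking key transitions against the previous frame's keyboard state makes each press fire once. A sketch for the Ingamemenu class (the same guard belongs on the Escape check in Maingame.Update):

    // previousKbState must persist across frames, so it becomes a field.
    private KeyboardState previousKbState;

    public void Update(GameTime gametime)
    {
        KeyboardState kbState = Keyboard.GetState();
        // Fire only on the frame the key goes down, not on every frame it is held.
        if (kbState.IsKeyDown(Keys.Y) && previousKbState.IsKeyUp(Keys.Y))
            game1.ChangeGameState(Game1.GameStates.MenuState);
        if (kbState.IsKeyDown(Keys.N) && previousKbState.IsKeyUp(Keys.N))
            game1.ChangeCurrentToLastGameState();
        previousKbState = kbState;
    }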

    Read the article

  • Bingbot requests from Google IP address

    - by JITHIN JOSE
    We have some suspicious requests to our server:
    74.125.186.46 - - [24/Aug/2014:23:24:11 -0500] "GET <url> HTTP/1.1" 200 16912 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
    74.125.187.193 - - [24/Aug/2014:23:24:12 -0500] "GET <url> HTTP/1.1" 200 20119 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
    As the logs show, the user-agent claims to be bingbot, but whois data for the IP addresses (74.125.186.46 and 74.125.187.193) shows they belong to Google's servers. So is it Google, Bing, or some other content scraper?
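    A minimal sketch of the standard check for this: forward-confirmed reverse DNS, which both Bing and Google document for verifying their crawlers, since user-agent strings are trivially faked. Genuine bingbot hosts reverse-resolve under search.msn.com (Googlebot under googlebot.com or google.com), while 74.125.x.x addresses typically reverse-resolve under Google's 1e100.net:

    using System;
    using System.Linq;
    using System.Net;

    class BotVerifier
    {
        static bool IsGenuineBingbot(string ip)
        {
            // Given an IP literal, GetHostEntry performs a reverse lookup.
            IPHostEntry reverse = Dns.GetHostEntry(ip);
            if (!reverse.HostName.EndsWith(".search.msn.com", StringComparison.OrdinalIgnoreCase))
                return false;
            // Forward-confirm: the claimed host name must resolve back to the same IP.
            IPHostEntry forward = Dns.GetHostEntry(reverse.HostName);
            return forward.AddressList.Any(a => a.ToString() == ip);
        }

        static void Main()
        {
            // Expected to print False for the logged requests: a Google-owned
            // address claiming to be bingbot fails the check, i.e. a scraper
            // (or a proxy/app on Google address space) spoofing the header.
            Console.WriteLine(IsGenuineBingbot("74.125.186.46"));
        }
    }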

    Read the article

  • Is it possible to make a MS-SQL Scalar function do this?

    - by Hokken
    I have a 3rd party application that can call a MS-SQL scalar function to return some summary information from a table. I was able to return the summary values with a table-valued function, but the application won't utilize table-valued functions, so I'm kind of stuck. Here's a sample from the table:
    trackingNumber, projTaskAward, expenditureType, amount
    1122, 12345-67-89, Supplies, 100
    1122, 12345-67-89, Supplies, 150
    1122, 12345-67-89, Supplies, 250
    1122, 12345-67-89, Misc, 50
    1122, 12345-67-89, Misc, 100
    1122, 98765-43-21, General, 200
    1122, 98765-43-21, Conference, 500
    1122, 98765-43-21, Misc, 300
    1122, 98765-43-21, Misc, 100
    1122, 98765-43-21, Misc, 100
    I want to summarize the amounts by projTaskAward & expenditureType, based on the trackingNumber. Here is the output I'm looking for:
    Proj/Task/Award: 12345-67-89 Expenditure Type: Supplies Total: 500
    Proj/Task/Award: 12345-67-89 Expenditure Type: Misc Total: 150
    Proj/Task/Award: 98765-43-21 Expenditure Type: General Total: 200
    Proj/Task/Award: 98765-43-21 Expenditure Type: Conference Total: 500
    Proj/Task/Award: 98765-43-21 Expenditure Type: Misc Total: 500
    I'd appreciate any help anyone can give in steering me in the right direction.
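    Not a definitive answer, but since a scalar function can return only a single value, one common approach is to fold the grouped sums into one string with FOR XML PATH(''). A hedged T-SQL sketch, assuming SQL Server 2005+ and with dbo.Bookings standing in for the real table name (the columns match the sample above):

    CREATE FUNCTION dbo.SummarizeTracking (@trackingNumber INT)
    RETURNS VARCHAR(MAX)
    AS
    BEGIN
        DECLARE @result VARCHAR(MAX);
        SELECT @result =
            (SELECT 'Proj/Task/Award: ' + projTaskAward
                  + ' Expenditure Type: ' + expenditureType
                  + ' Total: ' + CAST(SUM(amount) AS VARCHAR(20))
                  + CHAR(13) + CHAR(10)              -- one summary line per group
             FROM dbo.Bookings
             WHERE trackingNumber = @trackingNumber
             GROUP BY projTaskAward, expenditureType
             FOR XML PATH(''));                      -- concatenates the grouped rows into one string
        RETURN @result;
    END

    One caveat: FOR XML PATH('') XML-escapes characters such as & and <; if those can appear in your data, add the TYPE directive and extract the text with .value('.', 'varchar(max)') to get the raw string back.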

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >