Search Results

Search found 2207 results on 89 pages for 'nick locking'.


  • Stuck in Infinite Loop while PostInvalidating

    - by Nicholas Roge
    I'm trying to test something, however, the loop I'm using keeps getting stuck while running. It's just a basic lock thread while doing something else before continuing kind of loop. I've double checked that I'm locking AND unlocking the variable I'm using, but regardless it's still stuck in the loop. Here are the segments of code I have that cause the problem: ActualGame.java: Thread thread=new Thread("Dialogue Thread"){ @Override public void run(){ Timer fireTimer=new Timer(); int arrowSequence=0; gameHandler.setOnTouchListener( new OnTouchListener(){ @Override public boolean onTouch(View v, MotionEvent me) { //Do something. if(!gameHandler.fireTimer.getActive()){ exitLoop=true; } return false; } } ); while(!exitLoop){ while(fireTimer.getActive()||!gameHandler.drawn); c.drawBitmap(SpriteSheet.createSingleBitmap(getResources(), R.drawable.dialogue_box,240,48),-48,0,null); c.drawBitmap(SpriteSheet.createSingleBitmap(getResources(),R.drawable.dialogue_continuearrow,32,16,8,16,arrowSequence,0),-16,8,null); gameHandler.drawn=false; gameHandler.postInvalidate(); if(arrowSequence+1==4){ arrowSequence=0; exitLoop=true; }else{ arrowSequence++; } fireTimer.startWait(100); } gameHandler.setOnTouchListener(gameHandler.defaultOnTouchListener); } }; thread.run(); And the onDraw method of GameHandler: canvas.scale(scale,scale); canvas.translate(((screenWidth/2)-((terrainWidth*scale)/2))/scale,((screenHeight/2)-((terrainHeight*scale)/2))/scale); canvas.drawColor(Color.BLACK); for(int layer=0;layer<tiles.length;layer++){ if(layer==playerLayer){ canvas.drawBitmap(playerSprite.getCurrentSprite(), playerSprite.getPixelLocationX(), playerSprite.getPixelLocationY(), null); continue; } for(int y=0;y<tiles[layer].length;y++){ for(int x=0;x<tiles[layer][y].length;x++){ if(layer==0&&tiles[layer][y][x]==null){ tiles[layer][y][x]=nullTile; } if(tiles[layer][y][x]!=null){ runningFromTileEvent=false; canvas.drawBitmap(tiles[layer][y][x].associatedSprite.getCurrentSprite(),x*tiles[layer][y][x].associatedSprite.spriteWidth,y*tiles[layer][y][x].associatedSprite.spriteHeight,null); } } } } for(int i=0;i<canvasEvents.size();i++){ if(canvasEvents.elementAt(i).condition(this)){ canvasEvents.elementAt(i).run(canvas,this); } } Log.e("JapaneseTutor","Got here.[1]"); drawn=true; Log.e("JapaneseTutor","Got here.[2]"); If you need to see the Timer class, or the full length of the GameHandler or ActualGame classes, just let me know.
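
    Two details in the posted code are worth calling out: thread.run() executes the body on the calling thread rather than starting a new one (so if this is the UI thread, postInvalidate() can never be serviced and drawn never flips back to true), and the empty while(...); line is a busy-wait that pins a core even when it does run on its own thread. Below is a minimal, hedged sketch of the same loop using start() and a wait/notify handshake; the drawnLock monitor object is an illustrative addition, not something from the original classes.

        final Object drawnLock = new Object();

        Thread thread = new Thread("Dialogue Thread") {
            @Override
            public void run() {
                while (!exitLoop) {
                    synchronized (drawnLock) {
                        while (!gameHandler.drawn) {
                            try {
                                drawnLock.wait();        // sleep until onDraw() signals
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                        gameHandler.drawn = false;
                    }
                    // ... draw the dialogue frame, advance arrowSequence, etc. ...
                    gameHandler.postInvalidate();        // safe to call from a worker thread
                }
            }
        };
        thread.start();   // start(), not run(): run() would execute on the current thread

        // In GameHandler.onDraw(), after drawing:
        //     synchronized (drawnLock) { drawn = true; drawnLock.notifyAll(); }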

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source object temporal database, and currently I'm very fond of using persistent Red-Black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought: - Since all writes are done asynchronously, and a persistent data structure never invalidates the previous - and currently valid - structure, the write time isn't really a bottleneck. - There is some literature on structures like this designed exactly for disk usage, but it seems to me that those techniques add more read overhead to achieve faster writes, while I think exactly the opposite trade-off is preferable. Also, many of those techniques end up with multi-versioned trees that aren't strictly immutable, and strict immutability is crucial to justify the persistence overhead. - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely grow dramatically). A delta compression scheme could also be considered. - Of all the search tree structures, I really think Red-Black trees are the closest to what I need, since they require the fewest rotations. But there are some possible pitfalls along the way: - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner. - They could also lead to commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right? - The overhead of an immutable interface like this will grow exponentially, everything is doomed to fail soon, and this whole idea is bad. Any thoughts? Thanks! Edit: there seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
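
    To make the "persistent" terminology concrete (in the sense of the Wikipedia link above, not merely "stored on disk"), here is a small illustrative Java sketch of path copying in an immutable binary search tree, with the red-black balancing omitted: an insert allocates new nodes only along the root-to-leaf path and shares every untouched subtree with the previous version, which is what lets old readers keep a consistent snapshot without locks. This is a sketch of the general technique, not of the asker's design.

        // Illustrative only: an immutable BST with path-copying insert (no balancing).
        final class Node {
            final int key;
            final Node left, right;
            Node(int key, Node left, Node right) { this.key = key; this.left = left; this.right = right; }
        }

        final class PersistentTree {
            final Node root;
            PersistentTree(Node root) { this.root = root; }

            // Returns a NEW tree; the old tree stays valid and shares all untouched subtrees.
            PersistentTree insert(int key) { return new PersistentTree(insert(root, key)); }

            private static Node insert(Node n, int key) {
                if (n == null)    return new Node(key, null, null);
                if (key < n.key)  return new Node(n.key, insert(n.left, key), n.right);  // copy left path
                if (key > n.key)  return new Node(n.key, n.left, insert(n.right, key));  // copy right path
                return n;  // key already present: share the entire subtree
            }
        }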

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?
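
    For readers who have not seen option 1 in practice, here is a rough Java sketch of the thread-backed approach the question dismisses: two real threads hand control back and forth through a SynchronousQueue, which gives coroutine-like yield/resume semantics at the cost of a full kernel thread plus a context switch per hop. It is meant to illustrate why the overhead objection holds, not as a recommendation.

        import java.util.concurrent.SynchronousQueue;

        // Thread-backed "generator": every yield is a blocking handoff between two OS threads.
        public class ThreadBackedGenerator {
            private final SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            public void start() {
                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; ; i++) {
                            channel.put(i);      // "yield i" and block until someone resumes us
                        }
                    } catch (InterruptedException ignored) { }
                });
                producer.setDaemon(true);
                producer.start();
            }

            public int next() throws InterruptedException {
                return channel.take();           // "resume" the producer for one step
            }

            public static void main(String[] args) throws Exception {
                ThreadBackedGenerator gen = new ThreadBackedGenerator();
                gen.start();
                System.out.println(gen.next());  // 0
                System.out.println(gen.next());  // 1
            }
        }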

    Read the article

  • mysql: can't set max_allowed_packet to anything greater than 16MB

    - by sas
    I'm not sure if this is the right place to post this kind of question; if it's not, please (politely) let me know... :-) I need to save files greater than 16MB in a MySQL database from a PHP site. I've already changed c:\xampp\mysql\bin\my.cnf and set max_allowed_packet to 16 MB, and everything worked fine. Then I set it to 32 MB, but there's no way I can handle a file bigger than 16 MB: I get the following error: 'MySQL server has gone away' (the same error I had when max_allowed_packet was set to 1MB). There must be some other setting that doesn't allow me to handle files bigger than 16MB - maybe the PHP client, I guess, but I don't know where to edit it. This is the code I'm running; when file.txt is smaller than 16.776.192 bytes long it works fine, but if file.txt has 16.777.216 bytes I get the aforementioned error. Oh, and the field download.content is a longblob... $file = 'file.txt'; $file_handle = fopen( $file, 'r' ); $content = fread( $file_handle, filesize( $file ) ); fclose( $file_handle ); db_execute( 'truncate table download', true ); $sql = "insert into download( code, title, name, description, original_name, mime_type, size, content, user_insert_id, date_insert, user_update_id, date_update ) values ( 'new file', 'new file', 'sas.jpg', 'new file', '$file', 'mime', " . filesize( $file ) . ", '" . addslashes( $content ) . "', 0, " . db_char_to_sql( now_char(), 'datetime' ) . ", 0, " . db_char_to_sql( now_char(), 'datetime' ) . " )"; db_execute( $sql, true ); (the db_execute function just opens the connection and executes the SQL.) Running on Windows XP SP2, server version: 5.0.67-community, PHP Version 4.4.9, mysql client API version: 3.23.49, using ApacheFriends XAMPP (Basispaket) version 1.6.8, which comes with + Apache 2.2.9 + MySQL 5.0.67 (Community Server) + PHP 5.2.6 + PHP 4.4.9 + PEAR + phpMyAdmin 2.11.9.2 ... This is part of the content of c:\xampp\mysql\bin\my.cnf: # The MySQL server [mysqld] port= 3306 socket= "C:/xampp/mysql/mysql.sock" basedir="C:/xampp/mysql" tmpdir="C:/xampp/tmp" datadir="C:/xampp/mysql/data" skip-locking key_buffer = 16M # max_allowed_packet = 1M max_allowed_packet = 32M table_cache = 128 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M
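
    The question already points at the most likely direction: the my.cnf change only takes effect after a MySQL restart (and only if that my.cnf is the one actually read), and the sharp 16 MB ceiling matches the maximum packet size of the very old client library shown (mysql client API 3.23.49, via the PHP 4 mysql extension), so moving to the bundled PHP 5 mysql/mysqli client is worth trying. As a first check, the SQL below confirms what the running server really uses; the SET GLOBAL statement is a hedged convenience and only affects connections opened after it runs.

        -- What is the running server actually using? (my.cnf edits need a restart.)
        SHOW VARIABLES LIKE 'max_allowed_packet';

        -- Raise it for new connections without editing my.cnf (MySQL 5.0, needs SUPER):
        SET GLOBAL max_allowed_packet = 33554432;   -- 32 MB

    If the server already reports 33554432 and the error persists, the 16 MB cutoff most likely comes from the client side rather than the server.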

    Read the article

  • Can't destroy record in many-to-many relationship

    - by Dmart
    I'm new to Rails, so I'm sure I've made a simple mistake. I've set up a many-to-many relationship between two models: User and Group. They're connected through the junction model GroupMember. Here are my models (removed irrelevant stuff): class User < ActiveRecord::Base has_many :group_members has_many :groups, :through => :group_members end class GroupMember < ActiveRecord::Base belongs_to :group belongs_to :user end class Group < ActiveRecord::Base has_many :group_members has_many :users, :through => :group_members end The table for GroupMembers contains additional information about the relationship, so I didn't use has_and_belongs_to_many (as per the Rails "Active Record Associations" guide). The problem I'm having is that I can't destroy a GroupMember. Here's the output from rails console: irb(main):006:0> m = GroupMember.new => #<GroupMember group_id: nil, user_id: nil, active: nil, created_at: nil, updated_at: nil> irb(main):007:0> m.group_id =1 => 1 irb(main):008:0> m.user_id = 16 => 16 irb(main):009:0> m.save => true irb(main):010:0> m.destroy NoMethodError: undefined method `eq' for nil:NilClass from /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.4/lib/active_support/whiny_nil.rb:48:in `method_missing' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/persistence.rb:79:in `destroy' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/locking/optimistic.rb:110:in `destroy' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/callbacks.rb:260:in `destroy' from /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.4/lib/active_support/callbacks.rb:413:in `_run_destroy_callbacks' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/callbacks.rb:260:in `destroy' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/transactions.rb:235:in `destroy' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/transactions.rb:292:in `with_transaction_returning_status' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/connection_adapters/abstract/database_statements.rb:139:in `transaction' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/transactions.rb:207:in `transaction' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/transactions.rb:290:in `with_transaction_returning_status' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-3.0.4/lib/active_record/transactions.rb:235:in `destroy' from (irb):10 This is driving me crazy, so any help would be greatly appreciated.
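
    One detail in the console output above stands out: the inspected GroupMember lists group_id, user_id, active and the timestamps, but no id attribute, which is what you get when a join table is created with :id => false. In Rails 3.0, destroy builds its WHERE condition on the primary-key column through Arel, and a missing id column surfaces as exactly this undefined method `eq' for nil:NilClass. That diagnosis is an educated guess from the output shown, but it is cheap to check, and the sketch below (table name assumed from the models) shows one possible fix.

        # Quick check in rails console: is there an id column at all?
        GroupMember.column_names   # e.g. ["group_id", "user_id", "active", "created_at", "updated_at"]

        # One possible fix: give group_members a surrogate primary key.
        # (On SQLite you would need to recreate the table instead of altering it.)
        class AddIdToGroupMembers < ActiveRecord::Migration
          def self.up
            add_column :group_members, :id, :primary_key
          end

          def self.down
            remove_column :group_members, :id
          end
        end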

    Read the article

  • Problem setting row backgrounds in Android Listview

    - by zchtodd
    I have an application in which I'd like one row at a time to have a certain color. This seems to work about 95% of the time, but sometimes instead of having just one row with this color, it will allow multiple rows to have the color. Specifically, a row is set to have the "special" color when it is tapped. In rare instances, the last row tapped will retain the color despite a call to setBackgroundColor attempting to make it otherwise. private OnItemClickListener mDirectoryListener = new OnItemClickListener(){ public void onItemClick(AdapterView parent, View view, int pos, long id){ if (stdir.getStationCount() == pos) { stdir.moreStations(); return; } if (playingView != null) playingView.setBackgroundColor(Color.DKGRAY); view.setBackgroundColor(Color.MAGENTA); playingView = view; playStation(pos); } }; I have confirmed with print statements that the code setting the row to gray is always called. Can anyone imagine a reason why this code might intermittently fail? If there is a pattern or condition that causes it, I can't tell. I thought it might have something to do with the activity lifecycle setting the "playingView" variable back to null, but I can't reliably reproduce the problem by switching activities or locking the phone. private class DirectoryAdapter extends ArrayAdapter { private ArrayList<Station> items; public DirectoryAdapter(Context c, int resLayoutId, ArrayList<Station> stations){ super(c, resLayoutId, stations); this.items = stations; } public int getCount(){ return items.size() + 1; } public View getView(int position, View convertView, ViewGroup parent){ View v = convertView; LayoutInflater vi = (LayoutInflater)getContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE); if (position == this.items.size()) { v = vi.inflate(R.layout.morerow, null); return v; } Station station = this.items.get(position); v = vi.inflate(R.layout.songrow, null); if (station.playing) v.setBackgroundColor(Color.MAGENTA); else if (station.visited) v.setBackgroundColor(Color.DKGRAY); else v.setBackgroundColor(Color.BLACK); TextView title = (TextView)v.findViewById(R.id.title); title.setText(station.name); return v; } };
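
    A plausible explanation for the intermittent extra-colored rows is that two sources of truth are competing: getView() derives each row's background from station.playing / station.visited, while the click listener also paints the tapped View object directly, and ListView rows get rebuilt and recycled underneath you. A hedged sketch of keeping the state only in the model and letting getView() do all of the painting follows; the stdir.getStation(pos) accessor and the adapter field are assumptions, since the question doesn't show them.

        private Station currentlyPlaying;   // model-level state instead of a cached View

        private final OnItemClickListener mDirectoryListener = new OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int pos, long id) {
                if (stdir.getStationCount() == pos) {
                    stdir.moreStations();
                    return;
                }
                Station tapped = stdir.getStation(pos);   // assumed accessor
                if (currentlyPlaying != null) {
                    currentlyPlaying.playing = false;
                    currentlyPlaying.visited = true;
                }
                tapped.playing = true;
                currentlyPlaying = tapped;

                adapter.notifyDataSetChanged();   // getView() repaints every visible row
                playStation(pos);
            }
        };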

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view, upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information, usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded: Main Thread Background Thread viewDidLoad Get JSON data (using AFNetworking) Create child NSManagedObjectContext (MOC) Parse JSON data Insert managed objects in child MOC Save child MOC Post import completion notification Receive import completion notification Save parent MOC Perform fetch and reload table view Delete old managed objects in child MOC Save child MOC Post deletion completion notification Receive deletion completion notification Save parent MOC Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5: NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]; [child setParentContext:self.managedObjectContext]; [child performBlock:^{ // Create importer instance, passing it the child MOC... }]; The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly so would appreciate any advice.
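
    For what it's worth, with a nested parent/child stack you generally should not need merge notifications or explicit locks, provided each context is touched only on its own queue; the race described above usually creeps in when the parent context gets saved from the background thread's notification handler instead of on the parent's own queue. A hedged sketch of the intended shape (it assumes the parent context was created with NSMainQueueConcurrencyType, which the question doesn't state):

        NSManagedObjectContext *child =
            [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        child.parentContext = self.managedObjectContext;

        [child performBlock:^{
            // ... parse the JSON and insert/delete managed objects in "child" ...
            NSError *childError = nil;
            if ([child save:&childError]) {                      // pushes changes up to the parent
                [self.managedObjectContext performBlock:^{
                    NSError *parentError = nil;
                    [self.managedObjectContext save:&parentError];   // writes to the store
                    // Safe to refetch / reload the table from here: we're on the parent's queue.
                }];
            }
        }];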

    Read the article

  • Having trouble wrapping functions in the linux kernel

    - by Corey Henderson
    I've written a LKM that implements Trusted Path Execution (TPE) into your kernel: https://github.com/cormander/tpe-lkm I run into an occasional kernel OOPS (described at the end of this question) when I define WRAP_SYSCALLS to 1, and am at my wit's end trying to track it down. A little background: Since the LSM framework doesn't export its symbols, I had to get creative with how I insert the TPE checking into the running kernel. I wrote a find_symbol_address() function that gives me the address of any function I need, and it works very well. I can call functions like this: int (*my_printk)(const char *fmt, ...); my_printk = find_symbol_address("printk"); (*my_printk)("Hello, world!\n"); And it works fine. I use this method to locate the security_file_mmap, security_file_mprotect, and security_bprm_check functions. I then overwrite those functions with an asm jump to my function to do the TPE check. The problem is, the currently loaded LSM will no longer execute the code for its hook to that function, because it's been totally hijacked. Here is an example of what I do: int tpe_security_bprm_check(struct linux_binprm *bprm) { int ret = 0; if (bprm->file) { ret = tpe_allow_file(bprm->file); if (IS_ERR(ret)) goto out; } #if WRAP_SYSCALLS stop_my_code(&cs_security_bprm_check); ret = cs_security_bprm_check.ptr(bprm); start_my_code(&cs_security_bprm_check); #endif out: return ret; } Notice the section between the #if WRAP_SYSCALLS section (it's defined as 0 by default). If set to 1, the LSM's hook is called because I write the original code back over the asm jump and call that function, but I run into an occasional kernel OOPS with an "invalid opcode": invalid opcode: 0000 [#1] SMP RIP: 0010:[<ffffffff8117b006>] [<ffffffff8117b006>] security_bprm_check+0x6/0x310 I don't know what the issue is. I've tried several different types of locking methods (see the inside of start/stop_my_code for details) to no avail. To trigger the kernel OOPS, write a simple bash while loop that endlessly starts a backgrounded "ls" command. After a minute or so, it'll happen. I'm testing this on a RHEL6 kernel; it also works on Ubuntu 10.04 LTS (2.6.32 x86_64). While this method has been the most successful so far, I have tried another method of simply copying the kernel function to a pointer I created with kmalloc, but when I try to execute it, I get: kernel tried to execute NX-protected page - exploit attempt? (uid: 0). If anyone can tell me how to kmalloc space and have it marked as executable, that would also help me solve the above problem. Any help is appreciated!
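
    On the narrower "allocate space and have it marked as executable" part: kmalloc() memory comes from pages without the execute permission, but on the 2.6.32-era kernels mentioned here __vmalloc() accepts an explicit page protection, so something like the hedged sketch below is one way to get an executable copy of a function body. Architecture caveats (icache flushing, W^X enforcement, and the fact that the three-argument __vmalloc() form was dropped in much newer kernels) are deliberately glossed over.

        #include <linux/vmalloc.h>
        #include <linux/gfp.h>
        #include <linux/string.h>
        #include <asm/pgtable.h>

        /* Sketch only: allocate executable memory and copy a function body into it. */
        static void *alloc_exec_copy(const void *src, size_t len)
        {
                void *buf = __vmalloc(len, GFP_KERNEL, PAGE_KERNEL_EXEC);

                if (!buf)
                        return NULL;

                memcpy(buf, src, len);  /* caller is responsible for vfree() later */
                return buf;
        }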

    Read the article

  • What IPC method should I use between Firefox extension and C# code running on the same machine?

    - by Rory
    I have a question about how to structure communication between a (new) Firefox extension and existing C# code. The Firefox extension will use configuration data and will produce other data, so it needs to get the config data from somewhere and save its output somewhere. The data is produced/consumed by existing C# code, so I need to decide how the extension should interact with the C# code. Some pertinent factors: It's only running on Windows, in a relatively controlled corporate environment. I have a Windows service running on the machine, built in C#. Storing the data in a local datastore (like sqlite) would be useful for other reasons. The volume of data is low, e.g. 10kb of uncompressed XML every few minutes, and it isn't very 'chatty'. The data exchange can be asynchronous for the most part, if not completely. As with all projects, I have limited resources so I want an option that's relatively easy. It doesn't have to be ultra-high performance, but it shouldn't add significant overhead. I'm planning on building the extension in JavaScript (although I could be convinced otherwise if really necessary). Some options I'm considering: (1) use an XPCOM to .NET/COM bridge; (2) use a sqlite db: the extension would read from and save to it, while the C# code would run in the service, populating the db and then processing the data created by the extension; (3) use TCP sockets to communicate between the extension and the service, and let the service manage a local data store. My problem with (1) is I think this will be tricky and not so easy. But I could be completely wrong? The main problem I see with (2) is the locking of sqlite: only a single process can write data at a time, so there'd be some blocking. However, it would generally be nice to have a local datastore, so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard ... or what approach to take on the protocol: something custom or HTTP. Any comments on these ideas or other suggestions? UPDATE: I was planning on building the extension in JavaScript rather than C++
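
    If option (3) wins, one low-ceremony variant is to speak HTTP rather than a custom protocol: the extension's JavaScript can use XMLHttpRequest against a loopback-only URL, and the existing C# service can host the endpoint with System.Net.HttpListener. The sketch below is illustrative; the port, path, and payload handling are assumptions, and it serves requests synchronously since the stated volume (about 10kb of XML every few minutes) doesn't need more.

        using System.IO;
        using System.Net;
        using System.Text;

        // Runs inside the existing Windows service; loopback HTTP endpoint for the extension.
        class ExtensionIpcServer
        {
            public static void Run()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:18080/extension/");   // illustrative port/path
                listener.Start();

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();   // blocks until a request arrives

                    if (ctx.Request.HttpMethod == "POST")
                    {
                        // The extension pushes its XML output here.
                        string xml = new StreamReader(ctx.Request.InputStream,
                                                      ctx.Request.ContentEncoding).ReadToEnd();
                        // TODO: hand "xml" to the existing C# processing / local data store.
                    }

                    byte[] reply = Encoding.UTF8.GetBytes("<config/>");   // placeholder config payload
                    ctx.Response.ContentType = "text/xml";
                    ctx.Response.OutputStream.Write(reply, 0, reply.Length);
                    ctx.Response.Close();
                }
            }
        }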

    Read the article

  • Solve a maze using multicores?

    - by acidzombie24
    This question is messy; I don't need a working solution, I need some pseudo code. How would I solve this maze? This is a homework question. I have to get from point green to red. At every fork I need to 'spawn a thread' and go that direction. I need to figure out how to get to red, but I am unsure how to avoid paths I have already taken (finishing with any path is ok, I am just not allowed to go in circles). Here's an example of my problem: I start by moving down and I see a fork, so one thread goes right and one goes down (or this thread can take it, it doesn't matter). Now let's ignore the rest of the forks and say the one going right hits the wall, goes down, hits the wall and goes left, then goes up. The other thread goes down, hits the wall, then goes all the way right. The bottom path has been taken twice, by starting at different sides. How do I mark that this path has been taken? Do I need a lock? Is that the only way? Is there a lockless solution? Implementation-wise I was thinking I could have the maze look something like this. I don't like this solution because there is a LOT of locking (assuming I lock before each read and write of the haveTraverse member). I don't need to use the MazeSegment class below; I just wrote it up as an example. I am allowed to construct the maze however I want. I was thinking maybe the solution requires no connecting paths, and that's hassling me. Maybe I could split the map up instead of using the format below (which is easy to read and understand). But if I knew how to split it up I would know how to walk it, thus the problem. How do I walk this maze efficiently? The only hint I received was don't try to conserve memory by reusing it, make copies. However, that was related to a problem with ordering a list and I don't think the hint was a hint for this. class MazeSegment { enum Direction { up, down, left, right} List<Pair<Direction, MazeSegment*>> ConnectingPaths; int line_length; bool haveTraverse; } MazeSegment root; class MazeSegment { enum Direction { up, down, left, right} List<Pair<Direction, MazeSegment*>> ConnectingPaths; bool haveTraverse; } void WalkPath(MazeSegment segment) { if(segment.haveTraverse) return; segment.haveTraverse = true; foreach(var v in segment) { if(v.haveTraverse == false) spawn_thread(v); } } WalkPath(root);
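
    On the "is there a lockless solution?" part: yes, if claiming a cell is a single atomic operation, no explicit lock needs to be held while walking. Below is a hedged Java sketch (the question's pseudo code isn't tied to a language) in which each thread tries to claim a cell with putIfAbsent(); exactly one caller wins each cell and everyone else backs off, which also guarantees the walkers cannot go in circles. The cell encoding and walker outline are illustrative, not a drop-in replacement for the MazeSegment class above.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        final class VisitedSet {
            private final ConcurrentMap<Long, Boolean> visited = new ConcurrentHashMap<>();

            /** Returns true for exactly one caller per (x, y); false for every later caller. */
            boolean tryClaim(int x, int y) {
                long key = (((long) x) << 32) | (y & 0xffffffffL);
                return visited.putIfAbsent(key, Boolean.TRUE) == null;
            }
        }

        // Walker outline (one task per branch, e.g. submitted to an ExecutorService):
        //   if (!visited.tryClaim(x, y)) return;          // someone else already owns this cell
        //   if (isExit(x, y)) { reportPath(path); return; }
        //   for each open neighbour (nx, ny): submit a new walker continuing the path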

    Read the article

  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide it’s own execution.  In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading.  Rather, the Task class is used to implement our decomposition of tasks.  Once we’ve implemented our tasks, we need to execute them.  In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class. The TaskScheduler class is an abstract class which provides a single function: it schedules the tasks and executes them within an appropriate context.  This class is the class which actually runs individual Task instances.  The .NET Framework provides two (internal) implementations of the TaskScheduler class. Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks.  The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool.  This can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start(). Normally, when a Task is started by the default TaskScheduler, the task will be treated as a single work item, and run on a ThreadPool thread.  This pools tasks, and provides Task instances all of the advantages of the ThreadPool, including thread pooling for reduced resource usage, and an upper cap on the number of work items.  In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues.  By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool. There is one notable exception to my above statements when using the default TaskScheduler.  If a Task is created with the TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation.  This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of it’s work threads. The Task Parallel Library provides one other implementation of the TaskScheduler class.  In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext.  This scheduler can be retrieved within a thread that provides a valid SynchronizationContext by calling the TaskScheduler.FromCurrentSynchronizationContext() method. This implementation of TaskScheduler is intended for use with user interface development.  Windows Forms and Windows Presentation Foundation both require any access to user interface controls to occur on the same thread that created the control.  For example, if you want to set the text within a Windows Forms TextBox, and you’re working on a background thread, that UI call must be marshaled back onto the UI thread.  The most common way this is handled depends on the framework being used.  In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used.  In WPF, the equivelent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke. As an example, say we’re working on a background thread, and we want to update a TextBlock in our user interface with a status label.  The code would typically look something like: // Within background thread work... 
string status = GetUpdatedStatus(); Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action( () => { statusLabel.Text = status; })); // Continue on in background method This works fine, but forces your method to take a dependency on WPF or Windows Forms. There is an alternative option, however. Both Windows Forms and WPF, when initialized, setup a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property. This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner. The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method. When setting up our Tasks, as long as we're working on the UI thread, we can construct a TaskScheduler via: TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext(); We then can use this scheduler on any thread to marshal data back onto the UI thread. For example, our code above can then be rewritten as: string status = GetUpdatedStatus(); (new Task(() => { statusLabel.Text = status; })) .Start(uiScheduler); // Continue on in background method This is nice since it allows us to write code that isn't tied to Windows Forms or WPF, but is still fully functional with those technologies. I'll discuss even more uses for the SynchronizationContext based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct. In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler. The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations. These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
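
    Picking up the continuation teaser above, the same UI-bound scheduler composes naturally with ContinueWith: run the work on the default (ThreadPool) scheduler, then attach a continuation that is scheduled back onto the UI thread. The snippet below is a small illustrative variation on the article's own example, reusing its statusLabel and GetUpdatedStatus() names.

        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        Task.Factory.StartNew(() => GetUpdatedStatus())       // runs on the ThreadPool
            .ContinueWith(t =>
            {
                statusLabel.Text = t.Result;                  // runs on the UI thread
            }, uiScheduler);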

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification aka CHM files, is pretty old. In fact, it's practically ancient as it was introduced in 1997 when Internet Explorer 4 was introduced. Html Help 1.0 is basically a completely HTML based Help system that uses a Help Viewer that internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in locking down of the Web Browser control in Windows and also the Help Engine which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files CHM files still seem to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system based help at all, but links to online documentation. For Windows apps though it's still very common to see CHM help files and there are still a ton of CHM help out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output. Image is Everything and you ain't got it! One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example if you have help content that uses HTML5 or CSS3 content you might have an HTML Help topic like the following shown here in a full Web Browser instance of Internet Explorer: The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS 3 features. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.). Unfortunately if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file the resulting output on the same machine looks a bit less flashy: All the CSS3 styling is gone and although the page display and functionality still works, but all the extra styling features are gone. This even though I am running this on a Windows 7 machine that has IE9 that should be able to render these CSS features. Bummer. Web Browser Control - perpetually stuck in IE 7 Mode The problem is the Web Browser/Shell Components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell the control is stuck in IE7 rendering mode for engine compatibility reasons by default. However, there is at least one way to fix this explicitly using Registry keys on a per application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired and from then on the application will use that version of the Web Browser component. 
If the application is older than the specified version it falls back to the default version (IE 7 rendering). Forcing CHM files to display with IE9 (or later) Rendering Knowing that we can force the IE usage for a given process it's also possible to affect the CHM rendering by setting same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application as there are a number of ways that can launch the help engine. hh.exeThe standalone Windows CHM Help Viewer that launches when you launch a CHM from Windows Explorer. You can manually add hh.exe to the registry keys. YourApplication.exeIf you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content your application is your host. You should add your application's exe to the registry during application startup. foxhhelp9.exeIf you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry. What to set You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications. 32 bit only or 64 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe 32 bit on 64 bit machine: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe Note that it's best to always set both values ideally when you install your application so it works regardless of which platform you run on. The value specified is a DWORD value and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings or 9999 for IE 9 standards mode always. You can use the same logic for 8000 and 8888 for IE8 and the final value of 7000 for IE7 (one has to wonder what they're going todo for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use. 9000 means that IE9 will be used for rendering but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit (the default one shown selected) key is often used. So, now once I've set the registry key for hh.exe I can now launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display: Summary It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render itself properly but at least it's possible to make it work after all. 
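For instance, the hh.exe values described above can be captured in a .reg file; this is a hedged example (decimal 9000 is hex 2328), and you would add equivalent entries for your own application's executable alongside it:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION]
        "hh.exe"=dword:00002328

        [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION]
        "hh.exe"=dword:00002328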
Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not if you're building from a single source) you now can side step the issue of your customers asking: Why does my help file look so much shittier than the online help… No more! © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts, I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (IPS), together with numerous other install/packaging/boot/patching features are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you need to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will tech you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system for that matter), is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko! 
And Milek points out a long awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact that many customers have looked forward to: How to "bind" a Solaris 11 system to a specific gateway for specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in: DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts! DTrace And there we are: DTrace, the king of all observability tools. Long time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which contributed by Brendan himself! Security Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one major entry point less to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: First by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: The Solaris X86 AESNI OpenSSL Engine will make sure you'll use your Intel's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: It comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is now excuse not to encrypt any more... Developers Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring! Desktop and Graphics Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released! Performance Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof? 
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11! Send Me More Solaris 11 Launch Articles! These are just a few of the more interesting blog articles that came out around the Solaris 11 launch, I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far and share your enthusiasm for Solaris 11! *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add on to make it easy building cross-platform connected mobile aps. Pusher allows quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. 
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are a key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown on the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk , Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built-into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). 
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected: Once created, the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set. If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red. Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will no longer be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and removing the spending limit. This enables a super easy, one-page sign-up experience for MSDN users. Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test: This makes it trivially easy for every MSDN customer to start using Windows Azure today. If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times for different types of connections, as measured with Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script> You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script> You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; } If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products"); Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • Psychonauts crashes right after entering load save door

    - by user67974
    Psychonauts crashes right after entering the 'Load Save' door. Here is the terminal output: Shader assembly time: 0.88 seconds Found OpenAL device: 'Simple Directmedia Layer' Found OpenAL device: 'ALSA Software' Found OpenAL device: 'OSS Software' Found OpenAL device: 'PulseAudio Software' Opened OpenAL Device: '(null)' ERROR: CAudioDrv::CAudioDrv->alGenSources reports AL_INVALID_VALUE error. PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfx.isb' to 'WorkResource/Sounds/commonfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonvoice.isb' to 'WorkResource/Sounds/commonvoice.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmusic.isb' to 'WorkResource/Sounds/commonmusic.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmentalfx.isb' to 'WorkResource/Sounds/commonmentalfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmenfxmem.isb' to 'WorkResource/Sounds/commonmenfxmem.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfxmem.isb' to 'WorkResource/Sounds/commonfxmem.isb' GameApp::StartUp InitSoundFiles() completed in 0.15 seconds GameApp::StartUp Load some common textures completed in 0.00 seconds WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb GameApp::StartUp InitEntities() completed in 0.02 seconds PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' GameApp::StartUp m_pSaveLoadInterface->Startup() completed in 0.00 seconds GameApp::StartUp m_UserInterface.Setup() completed in 0.00 seconds STUBBED: multisample at EDisplayOptionsWidget (/home/icculus/projects/psychonauts/Source/game/luatest/Game/UIPCDisplayOptions.cpp:97) STUBBED: VK_* at CheckVirtualKey (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:1443) Game: Engine Running hook startup Game: Engine -> SetupGlobalObjects Game: Engine -> SetupLevelMenu Game: Engine -> InitMath GameApp::StartUp InitLua2() completed in 0.00 seconds GameApp::StartUp SetupLevelMenu() completed in 0.00 seconds STUBBED: do we even use this? at InitSocket (/home/icculus/projects/psychonauts/Source/game/luatest/Game/Gameplaylogger.cpp:210) GameApp::StartUp Post-Install total completed in 0.20 seconds Start Up completed in 1.57 seconds UnixMain: StartUp successful.. Working directory: /opt/psychonauts STUBBED: dispatch SDL events at PCMainHandleAnyWindowsMessages (/home/icculus/projects/psychonauts/Source/game/luatest/UnixMain.cpp:56) STUBBED: write me at GetJoystickInput (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:428) STUBBED: write me at GetJoystickActionValue (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:613) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' Prerender subtitle file: workresource\cutScenes\prerendered\dflogo.dfs not found PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' STUBBED: fixed function pipeline? at setColorOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2097) STUBBED: fixed function pipeline? 
at setColorArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2106) STUBBED: fixed function pipeline? at setColorArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2115) STUBBED: fixed function pipeline? at setAlphaOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2124) STUBBED: fixed function pipeline? at setAlphaArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2133) STUBBED: fixed function pipeline? at setAlphaArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2142) STUBBED: fixed function pipeline? at setProjected (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2223) LOC WARN: Could not open Localization file 'Localization/English/_StringTable.lub' STUBBED: memory status at UpdateMemoryTracking (/home/icculus/projects/psychonauts/Source/game/luatest/Game/GameApp.cpp:4884) WARN: Couldn't resize array to 128; out-of-bounds elements are still in use: Vertex Pool, 188 Loading new level 'STMU' STUBBED: Need multithreaded GL at DisplayLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:83) ========================= Memory post unload level ========================= ========================= LOC WARN: Could not open Localization file 'Localization/English/ST_StringTable.lub' DaveD: Info: Texture pack file contains 137 textures Doing a texture readback for locking! Game: Engine Saved[GLOBAL]: InstaHintFord_HostileRecord = [table] Game: Engine Saved[GLOBAL]: InstaHintFord_HostileOrder = [table] WARN: Redundant packfile read: anims\thought_bubble\bubblefirestarting.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleintothemind.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleinvisibility.jan WARN: Redundant packfile read: anims\thought_bubble\bubblepopperfill.jan WARN: Redundant packfile read: anims\thought_bubble\bubbletelekinesis.jan Initializing level script (if there is one) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/stfx.isb' to 'WorkResource/Sounds/stfx.isb' Game: Engine Reloading goals: Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: bUsedSalts = 0 Game: Engine Saved[GLOBAL]: bSTEntered = 1 Game: Engine Saved[GLOBAL]: memoriesST = 1 Game: Engine Saved[GLOBAL]: PsiBallColor = 'red' Game: Engine Saved[ST]: lastSubLevel = 'STMU' Game: Engine LOADING LEVEL st.STMU Game: Engine Saved[CA]: CALevelState = 1 Game: Engine Cutscene progression: CS Script moving from state nil to state nil, resultant state nil. Time: 0.124746672809124. 
* Stack Trace 1: (null) (line -1, file '(none)) () 2: SpawnScript (line -1, file 'C) (global) 3: onBeginLevel (line -1, file '(none)) (field) 4: (null) (line -1, file '(none)) () WARN: Cannot call GetDirectoryListing when running from the DVD Game: Engine Raz spawning at DartStart startpoint VM : LevelScript could not find script 'doorrimlight1' * Stack Trace 1: (null) (line -1, file '(none)) () WARN: (none(-1) SetEntityAlpha LevelScript: NULL script object passed Game: Engine Saved[GLOBAL]: bLoadedFromMainMenu = 1 Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: NeedRankIncrement = 0 STUBBED: Need multithreaded GL at HideLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:110) WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb Game: Engine Saved[GLOBAL]: SplineFigmentTVSizex = 4.51434326171875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizey = 46.38104248046875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizez = 47.08810424804688 WARN: (none(-1) SetNewAction LevelScript: no string passed ====================== Asset load progression ====================== Initial: 2.518 MB Vertex, 8.688 MB Texture Level : 3.719 MB Vertex, 22.535 MB Texture Scripts: 3.747 MB Vertex, 22.848 MB Texture ====================== ====================== Memory post level load ====================== ====================== WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb DaveD: Level loaded in 0.14 seconds Anim: anims\objects\tk_arrow_idle.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdn.jan: loaded (1 frames latency) Anim: anims\thought_bubble\shieldloop.jan: loaded (1 frames latency) Anim: anims\dartnew\standready.jan: loaded (1 frames latency) Anim: anims\dartnew\walkmove.jan: loaded (1 frames latency) Anim: anims\janitor\hint_end.jan: loaded (1 frames latency) Anim: anims\thought_bubble\ballstatic.jan: loaded (1 frames latency) Anim: anims\dartnew\actionfall.jan: loaded (1 frames latency) Anim: anims\dartnew\standstill.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_lf_rt.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_up_dn.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdefpose.jan: loaded (1 frames latency) 1: 1 (number) 1: 1 (number) STUBBED: This is probably wrong at GetDt (/home/icculus/projects/psychonauts/Source/CommonLibs/DFUtil/Profiler.cpp:181) STUBBED: set specular highlights at setSpecularEnable (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Renderer.cpp:2035) Anim: anims\dartnew\trnrtcycle.jan: loaded (1 frames latency) Anim: anims\dartnew\run.jan: loaded (1 frames latency) Anim: anims\dartnew\walk.jan: loaded (1 frames latency) Anim: anims\thought_bubble\bubbledoublejump.jan: loaded (1 frames latency) Anim: anims\dartnew\longjump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door1closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\180.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door3closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslide45angle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslideflat.jan: loaded (1 frames latency) Anim: 
anims\dartnew\trnlfcycle.jan: loaded (1 frames latency) WARN: (none(-1) SetNewAction LevelScript: no string passed Anim: anims\dartnew\mainmenu_jump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1open.jan: loaded (1 frames latency) ERROR: Assert in /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Encountered Error: Psychonauts has encountered an error /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Please contact technical support at http://www.doublefine.com. I am currently using Bumblebee for hybrid graphics, if that helps in any way.

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project “Hekaton”, which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer a significant performance improvement for our typical OLTP workload. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that it still supports the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP. Disk-based tables refer to the normal tables we have been creating in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which converts them into machine code; this can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can’t be used to reference any disk-based table. Interpreted Transact-SQL stored procedures refer to what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014, hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; Above example code creates files on three different drives (D:  S: and R:) for the data files and in memory storage so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that binary collation was specified as Windows (non-SQL). BIN2 collation is the only collation support at this point for any indexes on memory optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA file group to an existing database, use the below command to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk based table or a memory –optimized table but yes there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause as shown in the Create Table query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) Memory-optimized table should always be defined with a DURABILITY value which can be either SCHEMA_AND_DATA or  SCHEMA_ONLY the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist the data to disk which means the data durability is compromised whereas DURABILITY= SCHEMA_AND_DATA ensures that data is also persisted along with the schema. Indexing Memory Optimized Table A memory-optimized table must always have an index for all tables created with DURABILITY= SCHEMA_AND_DATA and this can be achieved by declaring a PRIMARY KEY Constraint at the time of creating a table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is considered as a memory optimized table and not just a normal table and also used the DURABILITY Clause= SCHEMA_AND_DATA which means it will persist data along with metadata and also you can notice this table has a PRIMARY KEY mentioned upfront which is also a mandatory clause for memory-optimized tables. We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well. Now as we covered the basics of Memory Optimized tables and understood the key things to remember while using memory optimized tables, let’s explore more using examples to understand the Performance gains using memory-optimized tables. 
I will be using the database which i created earlier in this article i.e. InMemoryDB in the below Demo Exercise. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating an another memory optimized table with similar structure but DURABILITY = SCHEMA_Only CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 Inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 Gb RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business table and memory- optimized tables with Durability SCHEMA_ONLY is even faster as it does not bother persisting its data to disk which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory_Optimized tables on you, I hope you like this article and it helped you understand  the fundamentals of IN-Memory OLTP . Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • svnstat script

    - by Kyle Hodgson
    So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually, now I'm writing up a bash script in cygwin on my Vista laptop, as I intend to move this to a Linux server at some point. Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way. Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log ... this time, it did Applications, and ran the log, and then check out Database, and froze. I'll put the sleep 30 before and after the log this time. co2.sh #!/bin/bash function checkout { mkdir $1 svn checkout svn://dev-server/$1 $1 svn log --verbose --xml >> svn.log $1 sleep 30 } cd /cygdrive/c/Users/My\ User/Documents/Repos/wc checkout Applications checkout Database checkout WebServer/www.mysite.com checkout WebServer/anotherhost.mysite.com checkout WebServer/AnotherApp checkout WebServer/thirdhost.mysite.com checkout WebServer/fourthhost.mysite.com checkout WebServer/WebServices It works, for the most part - but for some reason it has a tendency to stop working after a few repositories, usually right after finishing a repository before going to the next one. When it fails, it will not recover on its own. I've tried commenting out the svn line, it goes in and creates all the directories just fine when I do that - so its not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top on remote systems running all night and rxvt didn't seem to mind. Also I haven't done any bash scripting in cygwin so I suppose this might not be recommended; though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here? Edit: this time when I hit ctrl+z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see there are lots of svn processes hanging around... strange. 
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs [1]- Stopped bash co2.sh [2]+ Stopped ./co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ kill %1 [1]- Stopped bash co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ [1]- Terminated bash co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ ps PID PPID PGID WINPID TTY UID STIME COMMAND 7872 1 7872 2340 0 1000 Jun 29 /usr/bin/svn 7752 1 6140 7828 1 1000 Jun 29 /usr/bin/svn 6192 1 5044 2192 1 1000 Jun 30 /usr/bin/svn 7292 1 7452 1796 1 1000 Jun 30 /usr/bin/svn 6236 1 7304 7468 2 1000 Jul 2 /usr/bin/svn 1564 1 5032 7144 2 1000 Jul 2 /usr/bin/svn 9072 1 3960 6276 3 1000 Jul 3 /usr/bin/svn 5876 1 5876 5876 con 1000 11:22:10 /usr/bin/rxvt 924 5876 924 10192 4 1000 11:22:10 /usr/bin/bash 7212 1 7332 5584 4 1000 13:17:54 /usr/bin/svn 9412 1 5480 8840 4 1000 15:38:16 /usr/bin/svn S 8128 924 8128 9452 4 1000 17:38:05 /usr/bin/bash 9132 8128 8128 8172 4 1000 17:43:25 /usr/bin/svn 3512 1 3512 3512 con 1000 17:43:50 /usr/bin/rxvt I 10200 3512 10200 6616 5 1000 17:43:51 /usr/bin/bash 9732 1 9732 9732 con 1000 17:45:55 /usr/bin/rxvt 3148 9732 3148 8976 6 1000 17:45:55 /usr/bin/bash 5856 3148 5856 876 6 1000 17:51:00 /usr/bin/vim 7736 924 7736 8036 4 1000 17:53:26 /usr/bin/ps Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs [2]+ Stopped ./co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Here's an strace on the PID of the hung svn program, it's been like this for hours. Looks like its just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of? Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ strace -p 7304 ********************************************** Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408) App version: 1005.25, api: 0.156 DLL version: 1005.25, api: 0.156 DLL build: 2008-06-12 19:34 OS version: Windows NT-6.0 Heap size: 402653184 Date/Time: 2009-07-06 18:20:11 **********************************************

    Read the article

  • Cannot connect to MySQL over TCP locally - Connection Timeout - Ubuntu 9.04

    - by gav
    I am running Ubuntu and am ultimately trying to connect Tomcat to my MySQL database using JDBC. It has worked previously but after a reboot the instance now fails to connect. Both Tomcat 6 and MySQL 5.0.75 are on the same machine Connection string: jdbc:mysql:///localhost:3306 I can connect to MySQL on the command line using the mysql command The my.cnf file is pretty standard (Available on request) has bind address: 127.0.0.1 I cannot Telnet to the MySQL port despite netstat saying MySQL is listening I have one IpTables rule to forward 80 - 8080 and no firewall I'm aware of. I'm pretty new to this and I'm not sure what else to test. I don't know whether I should be looking in etc/interfaces and if I did what to look for. It's weird because it used to work but after a reboot it's down so I must have changed something.... :). I realise a timeout indicates the server is not responding and I assume it's because the request isn't actually getting through. I installed MySQL via apt-get and Tomcat manually. MySqld processes root@88:/var/log/mysql# ps -ef | grep mysqld root 21753 1 0 May27 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe mysql 21792 21753 0 May27 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock root 21793 21753 0 May27 ? 00:00:00 logger -p daemon.err -t mysqld_safe -i -t mysqld root 21888 13676 0 11:23 pts/1 00:00:00 grep mysqld Netstat root@88:/var/log/mysql# netstat -lnp | grep mysql tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 21792/mysqld unix 2 [ ACC ] STREAM LISTENING 1926205077 21792/mysqld /var/run/mysqld/mysqld.sock Toy Connection Class root@88:~# cat TestConnect/TestConnection.java import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; public class TestConnection { public static void main(String args[]) throws Exception { Connection con = null; try { Class.forName("com.mysql.jdbc.Driver").newInstance(); System.out.println("Got driver"); con = DriverManager.getConnection( "jdbc:mysql:///localhost:3306", "uname", "pass"); System.out.println("Got connection"); if(!con.isClosed()) System.out.println("Successfully connected to " + "MySQL server using TCP/IP..."); } finally { if(con != null) con.close(); } } } Toy Connection Class Output Note: This is the same error I get from Tomcat. root@88:~/TestConnect# java -cp mysql-connector-java-5.1.12-bin.jar:. TestConnection Got driver Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 1 milliseconds ago. The driver has not received any packets from the server. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:409) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122) at TestConnection.main(TestConnection.java:14) Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:409) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122) at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181) ... 12 more Caused by: java.net.ConnectException: Connection timed out at java.net.PlainSocketImpl.socketConnect(Native Method) ... 13 more Telnet Output root@88:~/TestConnect# telnet localhost 3306 Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection timed out

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack which is hosted in IIS. While performing load testing of the API we discovered that the response times are good but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints. The boxes are behind a load balancer so we get 3,500 concurrents per server. However as soon as we increase the number of total concurrent users we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds. The memory and CPU on the servers are quite low, both while the response times are good and when after they deteriorate. At peak with 10,000 concurrent users the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The below screenshot shows some key counters in perfmon during a load test with a total of 10,000 concurrent users. The highlighted counter is requests/second. To the right of the screenshot you can see the requests per second graph becoming really erratic. This is the main indicator for slow response times. As soon as we see this pattern we notice slow response times in the load test. How do we go about troubleshooting this performance issue? We are trying to identify if this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file: <system.web> <applicationPool maxConcurrentRequestsPerCPU="5000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000" /> </system.web> More details: The purpose of the API is to combine data from various external sources and return as JSON. It is currently using an InMemory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all data required and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner' that is implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services to fetch the data from the external sources in an asynchronous fashion so that the endpoint should only be as slow as the slowest external call (unless we have data in the cache of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of number of threads available to the process?
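    For readers less familiar with the pattern described in the question, here is a minimal sketch of the "cache each external call, fetch the sources concurrently" approach, written against .NET 4.0-era types (Task.Factory.StartNew, Task.WaitAll and System.Runtime.Caching.MemoryCache). The source names, cache keys and expiry times are invented for illustration and are not taken from the question.

    using System;
    using System.Runtime.Caching;
    using System.Threading.Tasks;

    public class AggregatingService
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Serve a single external call from the in-memory cache when possible;
        // otherwise fetch it and cache the result for the given lifetime.
        private static T GetOrFetch<T>(string cacheKey, Func<T> fetch, TimeSpan ttl) where T : class
        {
            var cached = Cache.Get(cacheKey) as T;
            if (cached != null)
                return cached;

            var value = fetch();
            Cache.Set(cacheKey, value, DateTimeOffset.UtcNow.Add(ttl));
            return value;
        }

        // Fetch two hypothetical external sources in parallel, so the endpoint is
        // only as slow as the slowest uncached call.
        public object Get()
        {
            var weatherTask = Task.Factory.StartNew(() =>
                GetOrFetch("weather", FetchWeatherFromExternalApi, TimeSpan.FromMinutes(5)));
            var pricesTask = Task.Factory.StartNew(() =>
                GetOrFetch("prices", FetchPricesFromExternalApi, TimeSpan.FromMinutes(1)));

            Task.WaitAll(weatherTask, pricesTask);

            return new { Weather = weatherTask.Result, Prices = pricesTask.Result };
        }

        private static string FetchWeatherFromExternalApi() { return "sunny"; /* call external service here */ }
        private static string FetchPricesFromExternalApi() { return "42.00"; /* call external service here */ }
    }

    One design note: blocking on Task.WaitAll inside a request holds an ASP.NET worker thread for the duration of the slowest uncached external call, so under heavy concurrency this style can be one place where a thread-availability ceiling appears, though whether that is what is happening here cannot be determined from the question alone.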

    Read the article

  • CodePlex Daily Summary for Thursday, April 15, 2010

    CodePlex Daily Summary for Thursday, April 15, 2010

    New Projects
    - Application Logging Repository (ALR): The ALR is a light-weight logging framework that allows applications to log events and exceptions to a central repository.
    - Arkane.FileProperties.DSS: Arkane.FileProperties.Dss is a library for parsing the file header of a .DSS file (as used by Olympus digital dictaphone systems) to obtain time, v...
    - B in conTrol project: This project enables controlling log-in and locking your workstation automatically, identifying you by Bluetooth.
    - DarkBook: DarkBook is a personal library project.
    - Direct2D for Microsoft .Net: Direct2D, DirectWrite and Windows Imaging wrappers for .Net. This library allows access to the Direct2D, DirectWrite and Windows Imaging Windows API f...
    - DJ Ware: DJ Ware is an extensible music player with plugin support and innovative features to organize and explore music files. It is developed with C#, WPF...
    - gpsMe: gpsMe is a Windows Mobile 6.x mapping solution allowing you to place the user on a personalized map. The screen requirements are VGA or WVGA but, you ...
    - jErrorLog: jErrorLog is an error logging component for use in DotNet 2.0 or later applications. It can log error messages to any of the following: database, e...
    - KEMET_API: Java library (open source). This library is a help for studying Egyptian hieroglyphs.
    - Meadow: A web site project for a Swedish floorball team called Slackers. Home page built with ASP.NET 2.0, ASP.NET AJAX and SQL Server 2005.
    - Mustang Math: Mustang Math makes it easier for young children to practice basic math facts on the computer. No keyboard or mouse required - just say the answer!...
    - Net.Formats.oEmbed: oEmbed format implementation in C#. oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows...
    - Normlize O/R Mapper: Open source O/RM tool that participates with traditional inheritance object models as well as Hibernate/nHibernate style class shells. As I have t...
    - N-Twill Twitter Client for VB.NET: A Twitter client project built with the TwitterVB2 library, written in VB.NET 2008.
    - SIQM: Spatial Information Quality Management Toolset.
    - TIMETABLEASY Web: Under development.
    - TweetSharp: TweetSharp is a complete .NET library for micro-blogging platforms that allows you to write short and sweet expressions that fly to Twitter, Yammer...
    - UISandbox: UISandbox is sample C# source code showing how to deal with plugins requiring a sandbox, when those plugins must interact with WPF application inte...
    - WinForm SharePoint Web Part Manager: The SharePoint Web Part Manager is a WinForm tool using the SharePoint object model that enables developers and power users to add, update, delete,...
    - WoW Character Viewer: View your World of Warcraft character (or anyone else's character) using this application. Written using Visual Basic Express 2008, then ported t...
    - Xrns2XMod: Xrns2XMod converts from Renoise format (xrns) to mod or xm, which are more compatible formats playable from xmplay or vlc.

    New Releases
    - Arkane.FileProperties.DSS: 1.0 stable release: Executables and merge module for 1.0. (See documentation.)
    - Bluetooth Radar: Version 2.0: Add IrDA reference for Bluetooth sending using Obex; add project icon; add Bluetooth detection mode (auto-close application if there is no blueto...
    - BUtil: BUtil 5.0 Alpha: Adding of backup tasks... in progress.
    - Chronos WPF: Chronos v1.0 RC 1: Development will be feature frozen after this release; only bug fixes will be allowed. Updated nRoute assembly to v0.4 (http:...
    - clipShow: Version 2.5: Release that addresses the canonical syntax issues in search discovered by Tschachim (thanks again!). Also, the play list and play all menu items s...
    - DarkBook: DarkBook alpha: Hi, here comes the alpha version of DarkBook. It has all the functions already but is still in development. I hope it's helpful for you, at least it...
    - DirectQ: Release 1.8.3a: Improvements to 1.8.2, which will shortly be removed. This replaces the original 1.8.3 release from earlier today, which had some late-breaking ...
    - Effect Custom Tool for Visual Studio: Effect Custom Tool v1.1: A Visual Studio 2008 extension that helps you generate C# classes from effect (*.fx) files for use with Xna...
    - Folder Bookmarks: Folder Bookmarks 1.4.3: This is the latest version of Folder Bookmarks (1.4.3), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...
    - gpsMe: gpsMe v0.3: Required hardware: Windows Mobile 6, .Net Compact Framework 3.5, integrated GPS device, VGA or WVGA screen (normally works on others).
    - IST435: Lab 4 - Enterprise Level CMS with DotNetNuke: This is the "starter kit" that you must base your Lab 4 on. This lab must be completed in-class.
    - Mouse Jiggler: MouseJiggle-1.1: 1.1 release of Mouse Jiggler, now with x64 compatibility and the ability to start jiggling on run with the --jiggle or -j command-line switch.
    - Mustang Math: MustangMath.exe: This is a quick and dirty "0.1" prototype to demonstrate the speech recognition idea. It starts asking you questions automatically on launch and k...
    - MvcContrib, a Codeplex Foundation project: 2.0.36.0 for MVC2 (RTW): Please see the Change Log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip, MvcContrib.dll, MvcC...
    - Nito.LINQ: Beta (v0.3): New features for this release: several new supported platforms (see below); PDBs that are source-indexed to the appropriate CodePlex changeset. ...
    - OpenIdPortableArea: 0.1.0.2 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll, DotNetOpenAuth.xml, MvcContrib.dll, MvcContrib.xml, OpenIdPortableArea.dll, OpenIdPortableAre...
    - PokeIn Comet Ajax Library: PokeIn Sample with Library v0.2: New version of the PokeIn library with sample (v0.2). There are new features in this release and no bugs detected yet.
    - Project Tru Tiên: Elements-test V1-fix (v2): This is the EL test fixed on top of the V1 fix; call it the V2 fix of ELtest. In this fix, EL also gets quest fixes, with quests adjusted correctly t...
    - Rule 18 - Love your clipboard: Rule 18: This is the third public beta for the first version of Rule 18. This version has been updated to support Visual Studio 2010 RTM and .NET 4.0 RTM. ...
    - SevenZipSharp: SevenZipSharp 0.62: Added: extraction from SFX archives (it is now possible to unrar RAR self-extractors, unzip ZIP self-extractors, etc.); extraction from DOC, XLS, (...
    - SharePoint Labs: SPLab3001A-FRA-Level200: This SharePoint lab will teach the persistence object layer that SharePoint uses to centrally store configuration data and o...
    - TTXPathNavigator: TTXPathNavigator for VS2010: Version for Visual Studio 2010.
    - turing machine simulator: SDS: SDS document.
    - VecDraw: VecDraw_0.2.25: Alpha release for test purposes.
    - WinForm SharePoint Web Part Manager: Beta 1: First release of the WinForm SharePoint web part manager tool.
    - Xrns2XMod: Xrns2XMod 0.5.1: Mod and XM conversion format - no sample data conversion at the moment.
    - Zip Solution: ZipSolution 5.3: Features: 1. Added WaitMsec for Visual Studio support with getting access to files in the post-build event; 2. Added ShowTextInToolbars to app.config ...

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); ASP.NET; Microsoft SQL Server Community & Samples; PHPExcel; patterns & practices – Enterprise Library.

    Most Active Projects: Rawr; patterns & practices – Enterprise Library; GMap.NET - Great Maps for Windows Forms & Presentation; Farseer Physics Engine; Ionics Isapi Rewrite Filter; NB_Store - Free DotNetNuke Ecommerce Catalog Module; BlogEngine.NET; jQuery Library for SharePoint Web Services; DotRas; Facebook Developer Toolkit.

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

    - The machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH.
    - Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform).
    - The system will not necessarily have a constant number of machines. We may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage.
    - The implementation of this should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. For the sake of security, let's assume that their software could potentially be malicious.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP.

    It looks like LDAP can be used to store user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible.

    Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token).

    Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As noted above, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
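    Not part of the original question, but for a concrete picture: a common way to wire the LDAP + NFS combination on Ubuntu is to resolve accounts through NSS and automount home directories on demand, roughly as sketched below. The host name "fileserver", the export path, and the Kerberized NFSv4 security flavour are assumptions for illustration only, and the client needs the LDAP NSS/PAM (or sssd) and autofs packages installed.

      # /etc/nsswitch.conf -- look up users and groups in LDAP as well as local files
      passwd: files ldap
      group:  files ldap
      shadow: files ldap

      # /etc/auto.master -- let autofs manage /home
      /home  /etc/auto.home

      # /etc/auto.home -- mount each user's home from the NFS server on demand
      # ("fileserver" and /export/home are placeholders)
      *  -fstype=nfs4,sec=krb5  fileserver:/export/home/&

    With plain LDAP and no Kerberos, the last line would typically use sec=sys instead of sec=krb5, which is exactly the kind of security trade-off the question is asking about.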

    Read the article

  • ERROR 2003 (HY000): Can't connect to MySQL server on (111)

    - by JohnMerlino
    From my Ubuntu installation I am unable to connect to a remote TCP/IP host which contains a MySQL installation:

      viggy@ubuntu:~$ mysql -u user.name -p -h xxx.xxx.xxx.xxx -P 3306
      Enter password:
      ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111)

    I commented out the line below using vim in /etc/mysql/my.cnf:

      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      #bind-address = 127.0.0.1

    Then I restarted the server:

      sudo service mysql restart

    But still I get the same error. This is the content of my.cnf:

      #
      # The MySQL database server configuration file.
      #
      # You can copy this to one of:
      # - "/etc/mysql/my.cnf" to set global options,
      # - "~/.my.cnf" to set user-specific options.
      #
      # One can use all long options that the program supports.
      # Run program with --help to get a list of available options and with
      # --print-defaults to see which it would actually understand and use.
      #
      # For explanations see
      # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

      # This will be passed to all mysql clients
      # It has been reported that passwords should be enclosed with ticks/quotes
      # especially if they contain "#" chars...
      # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
      [client]
      port = 3306
      socket = /var/run/mysqld/mysqld.sock

      # Here is entries for some specific programs
      # The following values assume you have at least 32M ram

      # This was formally known as [safe_mysqld]. Both versions are currently parsed.
      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0

      [mysqld]
      #
      # * Basic Settings
      #
      user = mysql
      pid-file = /var/run/mysqld/mysqld.pid
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      lc-messages-dir = /usr/share/mysql
      skip-external-locking
      #
      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      #bind-address = 127.0.0.1
      #
      # * Fine Tuning
      #
      key_buffer = 16M
      max_allowed_packet = 16M
      thread_stack = 192K
      thread_cache_size = 8
      # This replaces the startup script and checks MyISAM tables if needed
      # the first time they are touched
      myisam-recover = BACKUP
      #max_connections = 100
      #table_cache = 64
      #thread_concurrency = 10
      #
      # * Query Cache Configuration
      #
      query_cache_limit = 1M
      query_cache_size = 16M
      #
      # * Logging and Replication
      #
      # Both location gets rotated by the cronjob.
      # Be aware that this log type is a performance killer.
      # As of 5.1 you can enable the log at runtime!
      #general_log_file = /var/log/mysql/mysql.log
      #general_log = 1
      #
      # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
      #
      # Here you can see queries with especially long duration
      #log_slow_queries = /var/log/mysql/mysql-slow.log
      #long_query_time = 2
      #log-queries-not-using-indexes
      #
      # The following can be used as easy to replay backup logs or for replication.
      # note: if you are setting up a replication slave, see README.Debian about
      # other settings you may need to change.
      #server-id = 1
      #log_bin = /var/log/mysql/mysql-bin.log
      expire_logs_days = 10
      max_binlog_size = 100M
      #binlog_do_db = include_database_name
      #binlog_ignore_db = include_database_name
      #
      # * InnoDB
      #
      # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
      # Read the manual for more InnoDB related options. There are many!
      #
      # * Security Features
      #
      # Read the manual, too, if you want chroot!
      # chroot = /var/lib/mysql/
      #
      # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
      #
      # ssl-ca=/etc/mysql/cacert.pem
      # ssl-cert=/etc/mysql/server-cert.pem
      # ssl-key=/etc/mysql/server-key.pem

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 16M

      [mysql]
      #no-auto-rehash # faster start of mysql but no tab completition

      [isamchk]
      key_buffer = 16M

      #
      # * IMPORTANT: Additional settings that can override those from this file!
      #   The files must end with '.cnf', otherwise they'll be ignored.
      #
      !includedir /etc/mysql/conf.d/

    (Note that I can log into my local mysql install just fine by running mysql (and it will log me in as root), and also that I can get into mysql on the remote server by logging in via ssh and then invoking mysql.) But I am unable to connect to the remote server from my terminal using the host, and I need to do it that way so that I can then use MySQL Workbench.
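    Not part of the original question, but error 111 is a plain "connection refused", so the following generic checks (run on the remote server unless noted) usually narrow it down; the commands are illustrative rather than taken from the post:

      # Is mysqld actually listening on a public interface, or only on 127.0.0.1?
      sudo netstat -ltnp | grep 3306      # or: sudo ss -ltnp | grep 3306

      # Is a firewall dropping or rejecting the port?
      sudo iptables -L -n | grep 3306

      # From the client machine: does anything answer on 3306 at all?
      nc -vz xxx.xxx.xxx.xxx 3306

    If the server is listening only on 127.0.0.1, the edited my.cnf may not be the one being read, or another bind-address setting (for example under /etc/mysql/conf.d/) may be overriding it; if it is listening on 0.0.0.0, a firewall or EC2-style security group is the more likely culprit.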

    Read the article

  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically it looks like my server's my.cnf file isn't being read at all.

    I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64bit on a Linode.com server. I have the latest versions of MySQL and PHP installed.

    I've edited the my.cnf file, commented out "skip-innodb", and have set InnoDB to be the default storage engine using

      default-storage-engine = innodb

    and then restarted mysql. But when I do show engines, MyISAM is still coming up as the default engine.

    Also - none of the InnoDB settings I've added to the my.cnf file are being read. For example, I have this in my.cnf:

      innodb_buffer_pool_size=4G

    But in phpmyadmin, InnoDB is showing as having a buffer pool size of 8,192 KiB. Similarly, I have this in the my.cnf:

      innodb_data-file_path = ibdata1:500M:autoextend

    But in phpmyadmin, it's reading as ibdata1:10M:autoextend. It doesn't look like MyISAM info is being read from the my.cnf file either. The my.cnf file has skip-external-locking commented out, but it's showing as "on" in phpmyadmin.

    So - yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works. I'm running a Drupal site on it and it seems to operate fine. So mysql seems to be drawing default settings from... some mysterious secret location. Any idea how I can make mysql see and use this my.cnf file?

    Actually, wait - it looks like it may be being read, not sure. I checked the error.log and found this:

      101128  4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem.
      InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
      InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file:
      InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages!
      InnoDB: Could not open or create data files.
      InnoDB: If you tried to add new data files, and it failed here,
      InnoDB: you should now edit innodb_data_file_path in my.cnf back
      InnoDB: to what it was, and remove the new ibdata files InnoDB created
      InnoDB: in this failed attempt. InnoDB only wrote those files full of
      InnoDB: zeros, but did not yet use them in any way. But be careful: do not
      InnoDB: remove old data files which contain your precious data!
      101128  4:28:52 [ERROR] Plugin 'InnoDB' init function returned error.
      101128  4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      101128  4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50'
      101128  4:28:52 [ERROR] Aborting
      101128  4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
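    Not something from the original post, but a quick way to confirm which option files the server is actually reading, and what it picks up from them (both commands ship with MySQL):

      # List the option files mysqld reads, in order
      mysqld --verbose --help 2>/dev/null | grep -A 1 "Default options"

      # Show the [mysqld] settings the option parser actually resolves
      my_print_defaults mysqld

    The last lines of the error log above also suggest the file is being parsed but then rejected: an unknown variable such as 'innodb_lock_wait_timout=50' (note the missing 'e') makes mysqld abort that configuration, which can make it look as though my.cnf were being ignored entirely.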

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    Always use some form of version control for all aspects of software development.
    - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
    - Reverting / reapplying changes is absolutely critical for efficient development.
    - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

    Always tag a commit (changeset) with comments.
    - This is a quick way to describe to someone else (or your future self) what the changeset entails.
    - Be brief but courteous. One or two sentences about the task, not the actual changes.
    - Use precommit hooks or set up the central repository to reject changes without comments (see the hook sketch after this post).

    Link changesets to documentation.
    - If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
    - It's best to have a precommit hook or system that requires this information, otherwise it's easy to forget.

    Ability to work offline is required, including commits and history.
    - Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

    Never lock resources (files) in a central repository… Rude!
    - We have merge tools for a reason; merging sucked a long time ago, it doesn't anymore… stop locking files!
    - This is unproductive, rude and annoying to other team members.

    Always review everything in your commit.
    - Never ever commit a set of files without reviewing the changes in each.
    - Never add a file without asking yourself, deep down inside, does this belong?
    - If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double check.
    - This is another reason why you want to avoid large, infrequent commits.

    Requirements for tools:
    - Quickly show pending changes for the entire repository.
    - Default action for a resource with pending changes is a diff.
    - Pluggable diff & merge tool.
    - Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

    The central repository is not your own personal dump yard. Breaking this rule is a surefire way to get the F bomb dropped in front of your name, multiple times.
    - If you turn on Visual Studio's commit-on-closing-studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

    Commit (integrate) to the central repository / branch frequently.
    - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.

    Never commit commented-out code.
    - If it isn't needed anymore, delete it!
    - If you aren't sure whether it might be useful in the future, delete it! This is why we have history.
    - If you don't know why it's commented out, figure it out and then either uncomment it or delete it.

    Don't commit build artifacts, user preferences and temporary files.
    - Build artifacts do not belong in VCS; everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe)
    - User preferences are your settings; stop overriding my preferences files! (ie: *.suo and *.user files)
    - Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository! (A minimal example appears after this post.)

    Be polite when merging unresolved conflicts.
    - Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to let you know that someone else is working in the same area and you might want to communicate with them.
    - Following the other rules, especially committing frequently, will reduce the likelihood of this.
    - Suck it up, we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3, as its merging capabilities rock.
    - Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

    Become intimate with your version control system and the tools you use with it.
    - Avoid trial and error as much as possible; sit down and test the tool out, read some tutorials, etc. Create test repositories and walk through common scenarios.
    - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI.
    - I like a combination of both TortoiseHg and the hg CLI to get the job done efficiently.

    Always tag releases.
    - Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
    - Create release branches to patch bugs and then merge the changes back to other development branch(es).

    If using feature branches, strive for periodic integrations.
    - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
    - Feature branches are best when they are mutually exclusive of active development in other branches.

    Use and abuse local commits, at least one per task in a story.
    - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

    Never commit a broken build or failing tests to the central repository.
    - It's OK for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).
    - If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. Exceptions arise when maintaining code bases that require shotgun surgery; in that case, it's a design smell :)

    Don't version sensitive information, especially usernames / passwords.

    There is one area where I haven't found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes
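    Not part of Wes's post, but to make the ignore-file and precommit-hook suggestions above concrete, a minimal Mercurial setup could look like the sketch below; the exact ignore patterns and the "#123" task-reference convention are assumptions for illustration, not something the post prescribes.

      # .hgignore -- versioned at the repository root, as suggested above
      syntax: glob
      bin/*
      obj/*
      *.dll
      *.exe
      *.suo
      *.user

      # .hg/hgrc in a clone -- refuse local commits whose message lacks a
      # task/story reference like "#123"; a hook that exits non-zero aborts
      # the commit (a central repository would enforce the same idea with a
      # pretxnchangegroup hook on pushes)
      [hooks]
      pretxncommit.require_ref = hg log -r $HG_NODE --template "{desc}" | grep -qE "#[0-9]+"

    Both TortoiseHg and the hg CLI run through the same configuration, so rules like these apply no matter which front end a team member prefers.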

    Read the article
