Search Results

Search found 9744 results on 390 pages for 'k means'.


  • How to Send the Contents of the Clipboard to a Text File via the Send to Menu

    - by Jason Faulkner
    We have previously covered how to send the contents of a text file to the Windows Clipboard with a simple Send To shortcut, but what if you want to do the opposite? That is: send the contents of the clipboard to a text file with a simple shortcut. No problem. Here’s how.

    Copy the ClipOut Utility

    While Windows offers the command line tool ‘clip’ as a way to direct console output to the clipboard, it does not have a tool to direct the clipboard contents to the console. To do this, we are going to use a small utility named ClipOut (download link at the bottom). Simply download and extract this file to a location in your Windows PATH variable (if you don’t know what this means, just extract the EXE to your C:\Windows folder) and you are ready to go.

    Add the Send To Shortcut

    Open your Send To folder location by going to Run > shell:sendto

    Create a new shortcut with the command:

        CMD /C ClipOut >

    Note the above command will overwrite the contents of the selected file. If you would like to append to the contents of the selected file, use this command instead:

        CMD /C ClipOut >>

    Of course, you could make shortcuts for both. Give a descriptive name to the shortcut. You’re finished.

    Using this shortcut will now send the text contents copied to your Windows Clipboard to the selected file. It is important to note that the ClipOut tool only supports outputting text. If you had binary data copied to your clipboard, then the output would be empty.

    Changing the Icon

    By default, the icon for the shortcut will appear as a command prompt, but you can easily change this by editing the properties of the shortcut and clicking the Change Icon button. We used an icon located in “%SystemRoot%\System32\shell32.dll”, but any icon of your liking will do.

    As an additional tweak, you can set the properties of the shortcut to run minimized. This will prevent the command window from “blinking” when the Send To command is run (instead it will blink in your taskbar, which is hardly noticeable).

    Links

    Download ClipOut Utility
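    For the curious, the clipboard-to-console step ClipOut performs is simple to reproduce. Below is a minimal C# sketch of a ClipOut-style utility (an illustration, not ClipOut's actual source, which isn't published here): it writes any text on the clipboard to standard output, and prints nothing for binary formats, matching the behavior described above. It assumes a reference to the System.Windows.Forms assembly.

        using System;
        using System.Windows.Forms;

        static class ClipOutSketch
        {
            [STAThread] // clipboard access requires an STA thread
            static void Main()
            {
                // Only text formats are supported; binary clipboard data yields no output.
                if (Clipboard.ContainsText())
                    Console.Write(Clipboard.GetText());
            }
        }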


  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation was that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment provides a management nightmare to all commercial users responsible for complying with all regulations that control the conduct of business: such as tracking, retaining, and recording company documents.

    SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 Gig limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess.

    Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or USB keys for off-line working. The result is that the System Administrator is faced by a complex hybrid system where backups have to be taken from Servers, and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell.

    If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you.

    Cheers, Michael


  • Create simple jQuery plugin

    - by ybbest
    In the last post, I showed you how to add a function to jQuery. In this post, I will show you how to create a plugin to achieve this.

    1. You need to wrap your code in the following construct. This is because you should not use $ directly, as $ is a global variable and could clash with some other library that also uses $. Basically, you pass the jQuery object into the function so that $ is made available inside it. (JavaScript uses functions to create scope, so you can make sure $ refers to jQuery inside the function.)

        (function ($) {
            // Your code goes here.
        })(jQuery);

    2. Put your code into the construct above.

        (function ($) {
            $.getParameterByName = function (name) {
                name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
                var regexS = "[\\?&]" + name + "=([^&#]*)";
                var regex = new RegExp(regexS);
                var results = regex.exec(window.location.search);
                if (results == null)
                    return "";
                else
                    return decodeURIComponent(results[1].replace(/\+/g, " "));
            };
        })(jQuery);

    3. Now you can reference the code in your project and call the method from your JavaScript.

    References:

    Provides scope for variables: Variables are scoped at the function level in JavaScript. This is different to what you might be used to in a language like C# or Java, where variables are scoped to the block. What this means is that if you declare a variable inside a loop or an if statement, it will be available to the entire function. If you ever find yourself needing to explicitly scope a variable inside a function, you can use an anonymous function to do this. You can actually create an anonymous function and then execute it straight away, and all the variables inside will be scoped to the anonymous function:

        (function () {
            var myProperty = "hello world";
            alert(myProperty);
        })();
        alert(typeof(myProperty)); // undefined

    How does an anonymous function in JavaScript work?
    Building Your First jQuery Plugin
    A Plugin Development Pattern


  • Storing hierarchical template into a database

    - by pduersteler
    If this title is ambiguous, feel free to change it, I don't know how to put this in a one-liner.

    Example: Let's assume you have an HTML template which contains some custom tags, like <text_field />. We now create a page based on a template containing more of those custom tags. When a user wants to edit the page, he sees a text field. He can input things and save it. This looks fairly easy to set up: you have something like a template_positions table which stores the content of those fields.

    Case: I now have a bit of a blockade keeping things as simple as possible. Assume you have the same tag given in your example, and additionally <layout> and <repeat> tags. Here's an example of how they should be used:

        <repeat>
            <layout name="image-left">
                <image />
                <text_field />
            </layout>
            <layout name="image-right">
                <text_field />
                <image />
            </layout>
        </repeat>

    We now have a block which can be repeated, obviously. This means: when creating/editing a page containing such a template block, I can choose between a layout image-left and image-right, which then gets inserted as a content element (where content for <image /> and <text_field /> gets stored). And because this is inside a <repeat>, content elements from the given layouts can be inserted multiple times.

    How do you store this? Simply said, this could be stored with the same setup I've written in the example above; I just need to add a parent_id or something similar to maintain a hierarchy. But I think I am missing something. At least the relation between an inserted content element and the origin/insertion point is missing. And what happens when I update the template file? Do I have to give every custom tag that acts as an editable part of a template an identifier that matches an identifier in the template to substitute them correctly? Or can you think of a clean solution that might be better?


  • Enum types, FlagsAttribute & Zero value – Part 2

    - by nmgomes
    In my previous post I wrote about why you should pay attention when using enum value Zero. After reading that post you are probably thinking like Benjamin Roux: why don’t you start the enum values at 0x1? Well I could, but doing that I lose the ability to have Sync and Async mutually exclusive by design.

    Take a look at the following enum types:

        [Flags]
        public enum OperationMode1
        {
            Async = 0x1,
            Sync = 0x2,
            Parent = 0x4
        }

        [Flags]
        public enum OperationMode2
        {
            Async = 0x0,
            Sync = 0x1,
            Parent = 0x2
        }

    To achieve mutual exclusion between Sync and Async values using OperationMode1 you would have to test both values:

        protected void CheckMainOperarionMode(OperationMode1 mode)
        {
            switch (mode)
            {
                case (OperationMode1.Async | OperationMode1.Sync | OperationMode1.Parent):
                case (OperationMode1.Async | OperationMode1.Sync):
                    throw new InvalidOperationException("Cannot be Sync and Async simultaneous");
                case (OperationMode1.Async | OperationMode1.Parent):
                case (OperationMode1.Async):
                    break;
                case (OperationMode1.Sync | OperationMode1.Parent):
                case (OperationMode1.Sync):
                    break;
                default:
                    throw new InvalidOperationException("No default mode specified");
            }
        }

    but this is a by-design constraint in OperationMode2. Why? Simply because 0x0 is the neutral element for the bitwise OR operation. Knowing this singularity, replacing and simplifying the previous method, you get:

        protected void CheckMainOperarionMode(OperationMode2 mode)
        {
            switch (mode)
            {
                case (OperationMode2.Sync | OperationMode2.Parent):
                case (OperationMode2.Sync):
                    break;
                case (OperationMode2.Parent):
                default:
                    break;
            }
        }

    This means that: if both Sync and Async values are specified, the Sync value always wins (Zero is the neutral element for the bitwise OR operation); if no Sync value is specified, the Async mode is used.

    Here is the final method implementation:

        protected void CheckMainOperarionMode(OperationMode2 mode)
        {
            if ((mode & OperationMode2.Sync) == OperationMode2.Sync)
            {
            }
            else
            {
            }
        }

    (Note the parentheses around the bitwise AND: == binds tighter than &, so the expression must be grouped explicitly to compile.)

    All the content above proves that the Async value (0x0) is useless from the arithmetic perspective, but without it we lose readability. The following IF statements are logically equal, but the first is definitely more readable:

        if (OperationMode2.Async | OperationMode2.Parent) { }

        if (OperationMode2.Parent) { }

    Here’s another example where you can see the benefits of the 0x0 value: the default value can be used explicitly.

        <my:Control runat="server" Mode="Async,Parent">
        <my:Control runat="server" Mode="Parent">
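    To see the mutual-exclusion-by-design point end to end, here is a minimal, self-contained C# sketch (names are assumed for illustration, not taken from the original post) that exercises an OperationMode2-style enum: because Async is 0x0, OR-ing it into a value changes nothing, so Sync and Async can never both be set.

        using System;

        [Flags]
        enum OperationMode
        {
            Async = 0x0,   // neutral element: OR-ing it changes nothing
            Sync = 0x1,
            Parent = 0x2
        }

        static class Demo
        {
            static bool IsSync(OperationMode mode) =>
                (mode & OperationMode.Sync) == OperationMode.Sync;

            static void Main()
            {
                // "Sync and Async together" collapses to just Sync.
                var mixed = OperationMode.Sync | OperationMode.Async;
                Console.WriteLine(mixed);                 // Sync
                Console.WriteLine(IsSync(mixed));         // True

                // Omitting Sync means the operation is implicitly Async.
                var implicitAsync = OperationMode.Parent;
                Console.WriteLine(IsSync(implicitAsync)); // False
            }
        }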


  • Collision detection doesn't work for automated elements in XNA 4.0

    - by NDraskovic
    I have a really weird problem. I made a 3D simulator of an "assembly line" as a part of a college project. Among other things it needs to be able to detect when a box object passes in front of a sensor. I tried to solve this by making a model of a laser and checking if the box collides with it.

    I had some problems with the BoundingSpheres of the models' meshes, so I simply create a BoundingSphere and place it in the same place as the model. I organized them into a list of BoundingSpheres called "spheres", and for each model I create one BoundingSphere. All models except the box are static, so the box object has its own BoundingSphere (not a member of the "spheres" list). I also implemented a picking algorithm that I use to start the movement.

    This is the code that checks for collision:

        if (spheres.Count != 0)
        {
            for (int i = 1; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(PickingRay) != null
                    && Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton)
                {
                    start = true;
                    break;
                }
                if (BoxSphere.Intersects(spheres[i]) && start)
                {
                    // MoveBox receives the direction (0) and a bool value that dictates
                    // whether the box should move or not (false means stop)
                    MoveBox(0, false);
                    start = false;
                    break;
                }
                if (start /*&& Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton*/
                    && !BoxSphere.Intersects(spheres[i]))
                {
                    MoveBox(0, true);
                    break;
                }
            }
        }

    The problem is this: when I use the mouse to move the box (the commented part in the third if condition) the collision works fine (I have another part of code that I removed to simplify my question - it calculates the "address" of the box, and by that number I know that the collision is correct). But when I comment it out (like in this example) the box just passes through the lasers and does not detect the collision (the idea is that the box stops at each laser and the user passes it forward by clicking on the appropriate "switch").

    Can you see the problem? Please help, and if you need more information I will try to provide it. Thanks
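    One thing worth noticing in the loop above is that every branch ends in break, so only one sphere is ever examined per frame. A hedged sketch of an alternative structure, reusing the question's own names (BoxSphere, spheres, start, MoveBox), that tests every laser before deciding whether the box may keep moving:

        // Sketch: decide movement only after testing every laser sphere.
        bool blocked = false;
        for (int i = 1; i < spheres.Count; i++)
        {
            if (BoxSphere.Intersects(spheres[i]))
            {
                blocked = true;
                break;
            }
        }
        if (start)
            MoveBox(0, !blocked);  // move only when no laser is hit
        if (blocked)
            start = false;         // wait for the next "switch" click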


  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a Database Administrator, or even a developer, is required to wear a spy’s hat. This necessity oftentimes is dictated by a need to take a glimpse into a black-box application for reasons varying from a performance issue to unauthorized access to data or resources, or, as in my most recent case, a closed-source custom application that was abandoned by a contractor who deserted without leaving the source code.

    It may not be news or unknown to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This “gem” is always quietly sitting in the Start – Programs – SQL Server <product version> – Performance Tools folder (yes, it is for performance analysis mostly, but not limited to), ready to help you!

    So, to the action: let’s start it up. Once ready, click on the File – New Trace button, or press Ctrl-N. The standard connection dialog you have seen in SSMS comes up, where you connect the standard way. One side note here: you will be able to connect only if your account belongs to the sysadmin or alter trace fixed server role.

    Upon a successful connection you will see the initial trace properties dialog. At this stage I will give a hint: you will have a wide variety of predefined templates, but to shorten your time to results you should opt for the TSQL_Grouped template.

    Now you need to set it up. In some cases you will know the principal’s login name (account) that needs to be monitored in advance, and in some (like mine) you will not. But it is VERY helpful to monitor just a particular account, to minimize the amount of results returned. So if you know it, you can go to the Events Selection tab, then click the Column Filters button, which brings up a dialog where you key in the account being monitored without any mask (or wildcard).

    If you do not know the principal name, you will need to poke around and look for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, not encrypted. This made my endeavor very easy.

    So after I entered the account to monitor, I clicked on the Run button and also started my black-box application. Voilà, in under a minute I had the SQL statement captured.


  • Code Design question, circular reference across classes?

    - by dsollen
    I have no code here, as this is more of a design question (I assume this is still the best place to ask it).

    I have a very simple server in Java which stores a mapping between certain values and UUIDs which are to be used by many systems across multiple platforms. It accepts a connection from a client and creates a clientSocket which stores the socket and all the other relevant data unique to that connection. Each clientSocket will run in its own thread and will block on the socket waiting for a read. I expect very little strain on this system; it will rarely get called, but when it does get a call it will need to respond quickly, and due to the risk of it having a peak time with multiple calls coming in at once, threaded is still better. Each thread has a reference to a Mapper class which stores the mapping of UUIDs that it's reporting to others (with proper synchronization, of course).

    This all works until I have to add a new UUID to the list. When this happens I want to report to all clients that care about that particular UUID that a new one was added. I can't multicast (a limitation of the system I'm running on), so I'm having each socket send the message to the client through the established socket. However, since each thread only knows about the socket it's waiting on, I didn't have a clear method of looking up every thread/socket that cares about the data to inform them of the new UUID. Polling is out, mostly because it seems a little too convoluted to try to maintain a list of newly added UUIDs.

    My solution as of now is to have the 'parent' class which creates the Mapper class and spawns all the threads pass itself as an argument to the Mapper. Then when the Mapper creates a new UUID it can make a call to the parent class telling it to send out updates to all the other sockets that care about the change. I'm concerned that this may be a bad design due to the use of a circular reference: parent has a reference to Mapper (to pass it to new clientSocket threads) and Mapper points back to parent. It doesn't really feel like a bad design to me, but I wanted to check, since circular references are supposed to be bad.

    Note: I realize this means that the thread associated with whatever socket originally received the request that spawned the creation of a UUID is going to pay the 'cost' of outputting to all the other clients that care about the new UUID. I don't care about this; as I said, I expect the clients to receive only intermittent messages. It's unlikely for one socket to receive multiple messages at one time, and there won't be that many sockets, so it shouldn't take too long to send messages to each of them. Perhaps later I'll fix the fact that I'm saddling a higher workload on whatever unfortunate thread gets the first request, but for now I think it's fine.
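    For concreteness, here is a minimal sketch of the parent/Mapper arrangement described, with hypothetical names (the original is Java; C# is used here for consistency with the other sketches on this page, and the shape is identical in either language): the parent owns the client writers and the Mapper, and the Mapper calls back into the parent to broadcast a new UUID.

        using System;
        using System.Collections.Generic;

        class Parent
        {
            private readonly Mapper _mapper;
            private readonly List<Action<Guid>> _clientWriters = new List<Action<Guid>>();

            public Parent()
            {
                // Mapper keeps a back-reference: the circular link in question.
                _mapper = new Mapper(this);
            }

            public void Register(Action<Guid> writer) => _clientWriters.Add(writer);

            // Called by the Mapper when a UUID is created.
            public void Broadcast(Guid id)
            {
                foreach (var write in _clientWriters)
                    write(id);
            }
        }

        class Mapper
        {
            private readonly Parent _parent;
            private readonly Dictionary<string, Guid> _map = new Dictionary<string, Guid>();

            public Mapper(Parent parent) { _parent = parent; }

            public Guid Add(string key)
            {
                var id = Guid.NewGuid();
                lock (_map) { _map[key] = id; }
                _parent.Broadcast(id); // the upward call that closes the cycle
                return id;
            }
        }

    A common way to break the compile-time cycle, if it bothers you, is for the Mapper to expose an event (or listener interface) that the parent subscribes to; the runtime behavior stays the same.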


  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: Virtualization Template perhaps?!?).

    Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor, both very important tools to ensure your deployment and installation will be successful.

    Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances. Add new nodes to an existing cluster, and also perform upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates.

    Maintenance: We find in this section options that will assist us in tasks like repairing corrupt installations or removing nodes from a cluster. An interesting option (we should discuss its benefits later in another post) is being able to do an Edition Upgrade; this is a feature expansion and addition based on your product installation (Developer to Enterprise, for example).

    Tools: From the System Configuration Checker, to identify readiness for deployment in a successful manner, to being able to report on installed features, to running upgrades of existing packages developed in the 2005 offering to the 2008 R2 release for SSIS.

    Resources: Useful and essential links to gather information and guidance.

    Advanced: Here is where it gets interesting. I break this down into 3 main groups:

    Installation Automation: When you install SQL Server, a configuration file gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to get that process working.

    Cluster configuration for Sysprep: Create images that are cluster-ready; 2 options, start the prep work, and then complete it at the final destination.

    Stand-alone configuration for Sysprep: Like the clustering counterpart, 2 options, prep and complete, giving you the option to create standard templates for your SQL Server deployments.

    I find it fitting that the 3 topics listed here should (and will) be additional topics I will discuss.

    Options: Very clear and specific about what this means. Select the Processor Type or the Installation Media Root Path.


  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files.

    One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB.

    I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so.

    It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second.

    You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes.

    ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable.

    The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file.

    You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!


  • Oracle Open World - the technologist's fall classic

    - by user581320
    Well, it’s September and the calendar is charging towards fall. I realized today that another summer had passed as the first rain in months fell here in northern California redwood country. If it’s fall, that means that Oracle Open World is right around the corner and that tens of thousands of technology professionals will soon converge on San Francisco to fill every hotel and every corner of their minds.

    As in years past, team UPK will be there in force to answer questions, demo and discuss all things UPK. On Thursday, October 4, from 12:45pm to 1:45pm, I along with several of my UPK teammates will be manning the User Productivity Kit Panel - Best Practices to Manage and Deploy Content. This session is an interactive session intended as an opportunity to get your UPK questions answered. To get things started we will answer some questions submitted in advance. If you have any questions or subject areas you’d like addressed during the session, let me know here in the blog and then come to the session and we’ll do our best to answer your questions.

    Peter Maravelias
    UPK Product Management


  • BoundingBox Intersection Problems

    - by Deukalion
    When I try to render two cubes of the same size, one beside the other, with the same proportions (XYZ), my problem is: why does Box1.BoundingBox.Contains(Box2.BoundingBox) == ContainmentType.Intersects - when it clearly doesn't intersect? I'm trying to place objects with BoundingBoxes as "intersection" checking, but this simple example clearly shows that this doesn't work. Why is that?

    I also try checking the height of the next object to be placed by checking intersection, adding each box's height += (Max.Y - Min.Y) to a Height value, so when I add a new Box it has a height value. This works, but sometimes, due to strange behavior, it adds extra values when there isn't anything there.

    This is an example of what I mean:

        BoundingBox box1 = GetBoundaries(new Vector3(0, 0, 0), new Vector3(128, 64, 128));
        BoundingBox box2 = GetBoundaries(new Vector3(128, 0, 0), new Vector3(128, 64, 128));

        if (box1.Contains(box2) == ContainmentType.Intersects)
        {
            // This will be executed
            System.Windows.Forms.MessageBox.Show("Intersects = True");
        }
        if (box1.Contains(box2) == ContainmentType.Disjoint)
        {
            System.Windows.Forms.MessageBox.Show("Disjoint = True");
        }
        if (box1.Contains(box2) == ContainmentType.Contains)
        {
            System.Windows.Forms.MessageBox.Show("Contains = True");
        }

    Test method:

        public BoundingBox GetBoundaries(Vector3 position, Vector3 size)
        {
            Vector3[] vertices = new Vector3[8];
            vertices[0] = position + new Vector3(-0.5f, 0.5f, -0.5f) * size;
            vertices[1] = position + new Vector3(-0.5f, 0.5f, 0.5f) * size;
            vertices[2] = position + new Vector3(0.5f, 0.5f, -0.5f) * size;
            vertices[3] = position + new Vector3(0.5f, 0.5f, 0.5f) * size;
            vertices[4] = position + new Vector3(-0.5f, -0.5f, -0.5f) * size;
            vertices[5] = position + new Vector3(-0.5f, -0.5f, 0.5f) * size;
            vertices[6] = position + new Vector3(0.5f, -0.5f, -0.5f) * size;
            vertices[7] = position + new Vector3(0.5f, -0.5f, 0.5f) * size;
            return BoundingBox.CreateFromPoints(vertices);
        }

    Box 1 should start at x -64, Box 2 should start at x 64, which means they never overlap. If I place Box 2 at 129 instead, it creates a small gap between the cubes, which is not pretty.

    So, the question is: how can I place two cubes beside each other and make them understand that they do not overlap or actually intersect? Because this way I can never automatically check for intersections or place cubes beside each other.
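    A plausible explanation, offered here as an assumption rather than a confirmed description of XNA internals: box1 spans -64..64 on x and box2 spans 64..192, so they share the plane x = 64 exactly, and inclusive boundary comparisons would count touching faces as an intersection. A hedged C# sketch of one common workaround, testing against a box shrunk by a small epsilon so that shared faces no longer register:

        using Microsoft.Xna.Framework;

        static class TouchingBoxes
        {
            // Shrink a box by epsilon on every side so boxes that merely
            // share a face no longer register as intersecting.
            public static BoundingBox Shrink(BoundingBox box, float epsilon)
            {
                var e = new Vector3(epsilon);
                return new BoundingBox(box.Min + e, box.Max - e);
            }

            public static bool StrictlyOverlaps(BoundingBox a, BoundingBox b, float epsilon = 0.001f)
            {
                return Shrink(a, epsilon).Intersects(b);
            }
        }

    With the boxes above, StrictlyOverlaps(box1, box2) would return false while the inclusive check reports an intersection.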


  • How can I cleanly and elegantly handle data and dependencies between classes

    - by Neophyte
    I'm working on a 2D top-down game in SFML 2, and need to find an elegant way in which everything will work and fit together. Allow me to explain.

    I have a number of classes that inherit from an abstract base that provides a draw method and an update method to all the classes. In the game loop, I call update and then draw on each class; I imagine this is a pretty common approach. I have classes for tiles, collisions, the player, and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML, I decided to have each class handle input (if required) in its update call.

    Up until now I have been passing in dependencies as needed. For example, in the player class, when a movement key is pressed, I call a method on the collision class to check if the position the player wants to move to will be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better; I'm just not sure how.

    I now have more complex things I need to implement. For example: a player is able to walk up to an object on the ground, press a key to pick it up/loot it, and it will then show up in inventory. This means that a few things need to happen:

    1. Check if the player is in range of a lootable item on keypress; otherwise do not proceed.
    2. Find the item.
    3. Update the sprite texture on the item from its default texture to a "looted" texture.
    4. Update the collision for the item: it might have changed shape or been removed completely.
    5. Inventory needs to be updated with the added item.

    How do I make everything communicate? With my current system I will end up with my classes going out of scope, and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better.

    Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.


  • Asynchronously returning hierarchical data using .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, and I want to search for a word in each file... what should my API look like?

    In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found (better means closer to the root of c:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report).

    Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | notstarted | complete), and might base it on System.Collections.Concurrent. Another idea that I'm considering is the consumer/producer pattern (which the concurrent collections can handle); however, I'm not sure what the objects "look" like.

    Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. If someone were to literally do a DIR /S as described above, then they would need to account for more than one matching file per subdirectory.

    More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion, but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node. It is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes.

    When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found), but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want to get a report of ALL the nodes that contain this RowKey.
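    One way to picture the producer/consumer shape being considered - a hedged sketch with assumed names and a hypothetical root path, using standard TPL and System.Collections.Concurrent types: the traversal produces matches tagged with their depth, and the caller consumes them as they arrive, upgrading to "better" (shallower) results as they show up.

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class HierarchySearch
        {
            static void Main()
            {
                var results = new BlockingCollection<(string Path, int Depth)>();
                var root = @"C:\temp";     // hypothetical root
                var word = "example";      // hypothetical search term

                var producer = Task.Run(() =>
                {
                    try { Walk(root, 0, word, results); }
                    finally { results.CompleteAdding(); }
                });

                // Consumer: report matches as they arrive; track the shallowest one.
                (string Path, int Depth)? best = null;
                foreach (var m in results.GetConsumingEnumerable())
                {
                    if (best == null || m.Depth < best.Value.Depth)
                    {
                        best = m;
                        Console.WriteLine($"New best match at depth {m.Depth}: {m.Path}");
                    }
                }
                producer.Wait();
            }

            static void Walk(string dir, int depth, string word,
                             BlockingCollection<(string Path, int Depth)> results)
            {
                // Error handling (access denied, etc.) omitted for brevity.
                foreach (var file in Directory.EnumerateFiles(dir))
                    if (File.ReadAllText(file).Contains(word))
                        results.Add((file, depth));

                // Fan out across subdirectories in parallel.
                Parallel.ForEach(Directory.EnumerateDirectories(dir),
                    sub => Walk(sub, depth + 1, word, results));
            }
        }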


  • Trying to use stencils in 2D while retaining layer depth

    - by Steve
    This is a screen of what's going on, just so you can get a frame of reference: http://i935.photobucket.com/albums/ad199/fobwashed/tilefloors.png

    The problem I'm running into is that my game is slowing down due to the amount of texture swapping I'm doing during my draw call. Since walls, characters, floors, and all objects are on their respective sprite sheet containing those types, per tile draw, the loaded texture is swapping no less than 3 to 5+ times as it cycles and draws the sprites in order to layer properly. Now, I've tried throwing all common objects together into their respective lists, and then using layerDepth drawing them that way, which makes things a lot better, but the new problem I'm running into has to do with the way my doors/windows are drawn on walls.

    Namely, I was using stencils to clear out a block on the walls in the shape of the door/window so that when the wall would draw, it would have a door/window sized hole in it. This was the way my draw was set up for walls when I was going tile by tile rather than grouped up common objects:

    1. First it would check to see if a door/window was on this wall. If not, it'd skip all the steps and just draw normally.
    2. Otherwise, end the current spriteBatch.
    3. Clear the buffers with a transparent color to preserve what was already drawn.
    4. Start a new spriteBatch with stencil settings.
    5. Draw the door area.
    6. End the spriteBatch.
    7. Start a new spriteBatch that takes into account the previously set stencil.
    8. Draw the wall, which will now be drawn with a hole in it.
    9. End that spriteBatch.
    10. Start a new spriteBatch with the normal settings to continue drawing tiles.

    In the tile-by-tile draw, clearing the depth/stencil buffers didn't matter, since I wasn't using any layerDepth to organize what draws on top of what. Now that I'm drawing from lists of common objects rather than tile by tile, it has sped up my draw call considerably, but I can't seem to figure out a way to keep the stencil system to mask out the area a door or window will be drawn into a wall.

    The root of the problem is that when I end a spriteBatch to change the DepthStencilState, it flattens the current RenderTarget and there is no longer any depth sorting for anything drawn further down the line. This means walls always get drawn on top of everything regardless of depth or positioning in the game world, and even on top of each other, as the stencil has to happen once for every wall that has a door or window.

    Does anyone know of a way to get around this? To boil it down, I need a way to draw having things sorted by layer depth while also being able to stencil/mask out portions of specific sprites.
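    For reference, the stencil states described in steps 4 and 7 above are typically set up along these lines in XNA 4.0 - a hedged sketch of the approach the question describes, not a solution to the depth-sorting conflict; spriteBatch is assumed to be the game's existing SpriteBatch.

        using Microsoft.Xna.Framework.Graphics;

        // Pass 1: write 1s into the stencil buffer where the door/window is drawn.
        var writeMask = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Always,
            StencilPass = StencilOperation.Replace,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        // Pass 2: draw the wall only where the stencil was NOT marked.
        var maskedDraw = new DepthStencilState
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.NotEqual,
            StencilPass = StencilOperation.Keep,
            ReferenceStencil = 1,
        };

        // Optional: keep the mask pass from writing color, only stencil.
        var noColor = new BlendState { ColorWriteChannels = ColorWriteChannels.None };

        spriteBatch.Begin(SpriteSortMode.Immediate, noColor, null, writeMask, null);
        // ... draw the door/window shape ...
        spriteBatch.End();

        spriteBatch.Begin(SpriteSortMode.Immediate, null, null, maskedDraw, null);
        // ... draw the wall; pixels under the door shape are rejected ...
        spriteBatch.End();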


  • Is it appropriate to try to control the order of finalization?

    - by Strilanc
    I'm writing a class which is roughly analogous to a CancellationToken, except it has a third state for "never going to be cancelled". At the moment I'm trying to decide what to do if the 'source' of the token is garbage collected without ever being set.

    It seems that, intuitively, the source should transition the associated token to the 'never cancelled' state when it is about to be collected. However, this could trigger callbacks that were only kept alive by their linkage from the token. That means what those callbacks reference might now be in the process of finalization. Calling them would be bad.

    In order to "fix" this, I wrote this class:

        public sealed class GCRoot
        {
            private static readonly GCRoot MainRoot = new GCRoot();
            private GCRoot _next;
            private GCRoot _prev;
            private object _value;

            private GCRoot()
            {
                this._next = this._prev = this;
            }

            private GCRoot(GCRoot prev, object value)
            {
                this._value = value;
                this._prev = prev;
                this._next = prev._next;
                _prev._next = this;
                _next._prev = this;
            }

            public static GCRoot Root(object value)
            {
                return new GCRoot(MainRoot, value);
            }

            public void Unroot()
            {
                lock (MainRoot)
                {
                    _next._prev = _prev;
                    _prev._next = _next;
                    this._next = this._prev = this;
                }
            }
        }

    intending to use it like this:

        Source()
        {
            ...
            _root = GCRoot.Root(callbacks);
        }

        void TransitionToNeverCancelled()
        {
            _root.Unroot();
            ...
        }

        ~Source()
        {
            TransitionToNeverCancelled();
        }

    but now I'm troubled. This seems to open the possibility for memory leaks, without actually fixing all cases of sources in limbo. For example, if a source is closed over in one of its own callbacks, then it is rooted by the callback root and so can never be collected.

    Presumably I should just let my sources be collected without a peep. Or maybe not? Is it ever appropriate to try to control the order of finalization, or is it a giant warning sign?


  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All-around awesome tool, but not the only gadget in your toolbox.

    I’m stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I’m frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I’m MORE than happy to do. But I’m not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool ‘goes away?’ Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you ‘can’t load your data anymore because XYZ’ isn’t valid when you could easily do that same task via SQL*Loader, Create Table As Selects, or 9 other different mechanisms.

    Sometimes change brings opportunity for improvement in the process. Don’t be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the ‘old tool’ doesn’t always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a ‘proper’ server process using SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don’t let old habits blind you to new solutions and possibilities.

    Of course I’m not going to sit here and say that our tools aren’t deficient in some areas or can’t be improved upon. But I bet if we work together we can find something that’s not only better for the business, but is also better for you.

    What do you ‘miss’ since you’ve started using SQL Developer as your primary Oracle database tool? I’d love to start a thread here and share ideas on how we can better serve you and your organization’s needs. The end solution might not look exactly like what you have in mind starting out, but I had no idea I’d be a Product Manager when I started college either.

    What can you no longer ‘do’ since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?


  • Criteria for a programming language to be considered "mature"

    - by Giorgio
    I was recently reading an answer to this question, and I was struck by the statement "The language is mature". So I was wondering what we actually mean when we say that "a programming language is mature".

    Normally, a programming language is initially developed out of a need, e.g.:

    - Try out / implement a new programming paradigm or a new combination of features that cannot be found in existing languages.
    - Try to solve a problem or overcome a limitation of an existing language.
    - Create a language for teaching programming.
    - Create a language that solves a particular class of problems (e.g. concurrency).
    - Create a language and an API for a special application field, e.g. the web (in this case the language might reuse a well-known paradigm, but the whole API must be new).
    - Create a language to push your competitor out of the market (in this case the creator might want the new language to be very similar to an existing one, in order to attract developers to the new programming language and platform).

    Regardless of the original motivation and scenario in which a language has been created, eventually some languages are considered mature. In my intuition, this means that the language has achieved (at least one of) its goals, e.g. "We can now use language X as a reliable tool for writing web applications."

    This is however a bit vague, so I wanted to ask what you consider the most important criteria (if any) that are applied when saying that a language is mature.

    IMPORTANT NOTE: This question is (on purpose) language-agnostic, because I am only interested in general criteria. Please write only language-agnostic answers and comments! I am not asking whether any specific "language X is mature" or "which programming languages can be considered mature", or whether "language X is more mature than language Y": please avoid posting any opinions or references about any specific languages, because these are out of the scope of this question.

    EDIT: To make the question more precise, by criteria I mean such things as "tool support", "adoption by the industry", "stability", "rich API", "large user community", "successful application record", "standardization", "clean and uniform semantics", and so on.


  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes

    - Explain the advantages of using a large number of registers
    - Discuss the way in which compilers optimize register usage
    - Discuss the evolution of CISC machines
    - Describe the characteristics of RISC architecture
    - Discuss the RISC vs. CISC controversy
    - Describe the way in which RISC and CISC design principles can be combined

    Instruction Execution Characteristics

    To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include:

    - Operations
    - Operands
    - Procedure Calls

    These three sections can be studied in depth in the textbook at pages 503-505.

    A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing performance of the most time-consuming features of typical HLL programs. Generally, 3 main characteristics came up to improve performance:

    - Use a large number of registers or use a compiler to optimize register usage
    - Careful attention needs to be paid to the design of instruction pipelines
    - A simplified (reduced) instruction set is indicated

    The use of a large register file

    One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507-510 for a detailed explanation.

    Compiler-based register optimization

    Reduced Instruction Set Architecture

    There are two advantages to smaller programs:

    - Because the program takes up less memory, there is a savings in that resource (this was more compelling when memory was more expensive).
    - Smaller programs should improve performance, and this will happen in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults.

    Certain characteristics are common to RISC processors:

    - One instruction per cycle
    - Register-to-register operations
    - Simple addressing modes
    - Simple instruction formats

    RISC vs. CISC

    After initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and CISC designs may benefit from the inclusion of some RISC features.


  • Play Framework Plugin for NetBeans IDE

    - by Geertjan
    The start of minimal support for the Play Framework in NetBeans IDE 7.3 Beta would constitute (1) recognizing Play projects, (2) an action to run a Play project, and (3) classpath support. Well, most of that I've created already; e.g., you can see logical views in the Projects window for Play projects (i.e., I can open all the samples that come with the Play distribution).

    Right-clicking a Play project lets you run it and, if the embedded browser is selected in the Options window, you can see the result in the IDE. Make a change to your code and refresh the browser, which immediately shows you your changes.

    What needs to be done, among other things:

    - A wizard for creating new Play projects, i.e., it would use the Play command line to create the application and then open it in the IDE.
    - Integration of everything available on the Play command line.
    - Maybe the logical view, i.e., what is shown in the Projects window, should be changed. Right now, only the folders "app" and "test" are shown there, with everything else accessible in the Files window, as can be seen in the screenshot above.
    - More work on the classpath, i.e., I've hardcoded a few things just to get things to work correctly.
    - Options window extension to register the Play executable, instead of the current hardcoded solution.
    - Scala integrations, i.e., investigate if/how the NetBeans Scala plugin is helpful and, if not, create different/additional solutions. E.g., the HTML templates are partly in Scala, i.e., we need to embed Scala support into HTML.
    - Hyperlinking in the "routes" file, as well as special support for the "application.conf" file.

    Anyone interested, especially if you're a Play fan (a "playboy"?), in joining me in working on this NetBeans plugin? I'll be uploading the sources to a java.net repository soon. It will be here, once it has been made publicly accessible: http://java.net/projects/nbplay/sources/nbplay

    Kind of a cool detail is that the NetBeans plugin is based on Maven, which means that you could use any Maven-supporting IDE to work on this plugin.


  • The type of programmer I want to be [closed]

    - by Aventinus_
    I'm an undergraduate Software Engineering student, although I've decided that pure programming is what I want to do for the rest of my life. The thing is that programming is a vast field, and although most of its aspects are extremely interesting, sooner or later I'll have to choose one (?) to focus on. I have several ideas for small projects I'd like to develop this summer, having in mind that this will gain me some experience and, in the best scenario, some cash. But the most important reason I'd like to develop something close to "professional" is to give myself direction on what I want to do as a programmer.

    One path is that of the Web Programmer. I enjoy PHP and MySQL, as well as HTML and CSS, although I don't really like ASP.NET. I can see myself writing web apps using the above technologies, as well as XML and JavaScript. I also have a neat idea for a Facebook app.

    The other path is that of the Desktop Programmer. This is a little more complicated, because I really, really enjoy high-level languages such as Java and Python, but not the low-level ones, such as C. I have used both Linux and Windows for the last 6 years and I like their latest DEs (meaning GNOME Shell and Metro). I can see myself writing desktop applications for both OSs as long as it means high-level programming. Ideally I'd like to be able to help the development of GNOME.

    The last path that interests me is the path of the Smartphone Programmer. I have created some sample applications on Android, and due to Java I found it quite an interesting experience. I can also see myself as an independent smartphone developer.

    These 3 paths seem equally interesting at the moment, due to the shallowness of my experience, I guess. I know that I should spend time with all of them and then choose the right one for me, but I'd like to know what the pros and cons are in terms of learning curve, fun, job finding, and of course financial rewards with each of these paths. I have a fair or basic understanding of the languages/technologies I described earlier, and this question will help me choose where to focus, at least for now.


  • Why do some user agents have spam URLs in them (and why are they always Opera/Presto User-Agents)?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) on the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not find even a single discussion about it on the internet, anywhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto", it doesn't mean it will have this web link, but it means there is about an 80% chance that it will.

    A typical anchor tag/link inside a user agent will look like this:

        Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60

    If you check it out at the website http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the angle brackets are in there in unescaped format. This isn't just true for botsvsbrowsers, but several other user agent listing sites.

    I'm really confused, and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e., are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on?

    The fact that it is so consistent in its format leads me to believe that it is an automated process (the setting or alteration of the user agent). I cannot decide on or understand the process by which this change is made (I know how to change a user agent), and I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8 or 9 point something browser version.

    I've run some statistical tests, parsing entries from all over the place and writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text, such as +http://www.someSite.com appended to a user agent, especially if it's a crawler or bot providing its service URL. This is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about "those".


  • How can you tell whether to use Composite Pattern or a Tree Structure, or a third implementation?

    - by Aske B.
    I have two client types, an "Observer" type and a "Subject" type. They're both associated with a hierarchy of groups.

    The Observer will receive (calendar) data from the groups it is associated with throughout the different hierarchies. This data is calculated by combining data from 'parent' groups of the group trying to collect data (each group can have only one parent).

    The Subject will be able to create the data (that the Observers will receive) in the groups it is associated with. When data is created in a group, all 'children' of the group will have the data as well, and they will be able to make their own version of a specific area of the data, but still linked to the original data created (in my specific implementation, the original data will contain time-period(s) and a headline, while the subgroups specify the rest of the data for the receivers directly linked to their respective groups).

    However, when the Subject creates data, it has to check whether any affected Observers have data that conflicts with it, which means a huge recursive function, as far as I can understand.

    So I think this can be summed up to the fact that I need to be able to have a hierarchy that you can go up and down in, and in some places be able to treat the groups as a whole (recursion, basically).

    Also, I'm not just aiming at a solution that works. I'm hoping to find a solution that is relatively easy to understand (architecture-wise, at least) and also flexible enough to easily receive additional functionality in the future.

    Is there a design pattern, or a good practice to go by, to solve this problem or similar hierarchy problems?

    EDIT: Here's the design I have: The "Phoenix" class is named that way because I didn't think of an appropriate name yet. Besides this, I need to be able to hide specific activities from specific observers, even though they are attached to them through the groups.

    A little off-topic: Personally, I feel that I should be able to chop this problem down into smaller problems, but it escapes me how. I think it's because it involves multiple recursive functionalities that aren't associated with each other, and different client types that need to get information in different ways. I can't really wrap my head around it. If anyone can guide me in a direction of how to become better at encapsulating hierarchy problems, I'd be very glad to receive that as well.
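    As a concrete starting point for the "go up and down" requirement, here is a minimal hedged sketch (hypothetical names, C# for consistency with the other examples on this page) of a Composite-style group node with a single parent link: data can be combined downward from ancestors, and changes can be propagated across the whole subtree.

        using System;
        using System.Collections.Generic;

        class Group
        {
            public string Name { get; }
            public Group Parent { get; private set; }  // single parent, per the question
            private readonly List<Group> _children = new List<Group>();
            private readonly List<string> _localData = new List<string>();

            public Group(string name) { Name = name; }

            public Group AddChild(Group child)
            {
                child.Parent = this;
                _children.Add(child);
                return child;
            }

            // Combine data from all ancestors down to this group (walk up).
            public IEnumerable<string> EffectiveData()
            {
                if (Parent != null)
                    foreach (var d in Parent.EffectiveData()) yield return d;
                foreach (var d in _localData) yield return d;
            }

            // Create data here; it applies to the whole subtree via EffectiveData.
            public void CreateData(string item)
            {
                _localData.Add(item);
                // Conflict checks against each descendant's observers would go here.
            }

            // Enumerate the subtree (walk down) for recursive checks.
            public IEnumerable<Group> Descendants()
            {
                foreach (var c in _children)
                {
                    yield return c;
                    foreach (var g in c.Descendants()) yield return g;
                }
            }
        }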


  • "You are missing the following 32-bit libraries, and Steam may not run: libc.so.6" The common fixes don't work,

    - by M_Steam_User
    So I know this is a problem that has been asked about a lot, but I've tried a bunch of solutions with no success. I'm running Ubuntu 12.04 (64 bit), and I just installed it yesterday. This is my first time working with Linux. The error is:

        You are missing the following 32-bit libraries, and Steam may not run: libc.so.6

    Things I've tried: First, I had downloaded Steam from the Steam website. I uninstalled it, and tried again from the Ubuntu Software Centre.

        sudo apt-get update
        sudo apt-get install ia32-libs
        sudo apt-get upgrade

    This installed a bunch of the 32-bit libraries, but did not fix the issue. This seems to be the major fix for most people.

    The direct approach of sudo apt-get install libc.so.6 returns this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package libc.so.6
        E: Couldn't find any package by regex 'libc.so.6'

    I guess libc.so.6 isn't a package, just a single file or something?

    I also tried:

        gksudo gedit /etc/ld.so.conf.d/steam.conf

    and added these two lines (though the second one was already in the file):

        /usr/lib32
        /usr/lib/i386-linux-gnu/mesa

    Then executed:

        sudo ldconfig

    But nothing seemed to happen; Steam still doesn't work. So, I feel like it is more likely that I have the library and Steam isn't looking in the right place.

    One thing I've seen is people usually reference /usr/local/lib/ for library locations. However, I can't find where to cd into /usr/; it isn't in my home folder. If /usr/ is the home folder, there is only a /.local folder which only has /share, no lib anywhere. Sorry for my Linux ignorance. I appreciate any help; I honestly have no idea how to confirm I have the library and point Steam to it, or if that is even the right thing to do.

    Edit: Tried this, not entirely sure what it means:

        ~$ ls -l /lib32/libc*
        -rwxr-xr-x 1 root root 1721832 Sep 30 11:06 /lib32/libc-2.15.so
        -rw-r--r-- 1 root root  185928 Sep 30 11:06 /lib32/libcidn-2.15.so
        lrwxrwxrwx 1 root root      15 Sep 30 11:06 /lib32/libcidn.so.1 -> libcidn-2.15.so
        -rw-r--r-- 1 root root   34316 Sep 30 11:06 /lib32/libcrypt-2.15.so
        lrwxrwxrwx 1 root root      16 Sep 30 11:06 /lib32/libcrypt.so.1 -> libcrypt-2.15.so
        lrwxrwxrwx 1 root root      12 Sep 30 11:06 /lib32/libc.so.6 -> libc-2.15.so


  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can still be helped by IT implementations). An example list would be:

        name; short desc; reference
        Travelling Salesman; find the shortest possible route on a multiple-target path; http://en.wikipedia.org/wiki/Travelling_salesman_problem
        Resource Disposition (aka Regulation); distribute a limited/exceeding input to a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html

    If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated, as I did not find any such list via research by myself).

    Details on scoping: I found it very hard to formulate what I want in a way that excludes everything I do not need (which may be the reason why I did not find anything via Google). There is a database-centric definition of what I am looking for in the section 'Processes' of the second example link. That somehow fits, but the database focus sort of drifts away from the pattern thinking I have in mind. So here are my own thoughts around what's in and what's out:

    - I am NOT looking for a foundational algorithm reference list of the kind implemented as a basis for any programming language. E.g., the PHP reference describes substr and strlen; that implements algorithms, but it is not what I am looking for.
    - The problem the algorithm addresses would exist even if there were no computers (or other IT components).
    - The main focus of the algorithm is NOT to help other algorithms.
    - Chances are high that there are implementations of the solution, or some workaround without any IT support, out there in the world.
    - However, the algorithm could be beneficially implemented/fully supported by a software application. This means: the problem the algorithm addresses has to be dealt with anyway, but running an algorithm implementation with software automates the process (that is why I posted it here and not somewhere else).
    - Typically such algorithm implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (which is fixed to produce not more than one output value).
    - In a normalized data model, such algorithm implementation outputs often span across multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input parameters and the rows in the table(s) at start time. This implies that any algorithm implementation/procedure must interact with a database (read and/or write).

    I am mainly looking for patterns, not specific implementations. Example: the Travelling Salesman assumes any coordinates; it does not say "You need a table targets with fields x and y." However, sometimes descriptions are focused very much on examples with specific implementations. No worries, as long as the pattern gets clear.

