Search Results

Search found 95445 results on 3818 pages for 'volume one'.


  • Simple C: atof giving wrong value [migrated]

    - by Doc
    I have a program that reads input from a single line (a string, obviously) and organizes it into arrays. The problem I have is that at one point the program reads two different values and prints the first one twice. Initially I thought the program was reading the same value twice, but when I tested it, it turned out that it parses the correct value and then outputs the wrong one. For example, for the input:

        2 0.90 0.75 0.7 0.65

    the relevant code is (sorry to snip it):

        while (fgets(string[test], sizeof(string[test]), ifp)) {
            pch = strtok_r(NULL, " ", &prog);
            tem3 = atoi(pch);
            while (loop < tem3) {
                pch = strtok_r(NULL, " ", &prog);
                venseatfloat[test][loop][DISCOUNT][OCCUPIED] = (float)atof(pch);
                printf("%f is discount\t", venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
                pch = strtok_r(NULL, " ", &prog);
                strcpy(temp, pch);
                venseatfloat[test][loop][REGULAR][OCCUPIED] = (float)atof(pch);
                /* note: this printf reads the DISCOUNT element, not REGULAR */
                printf("%s is the string but %.3f is regular\n", temp, venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
                loop++;
            }
        }

    and the output is:

        0.900000 is discount    0.75 is the string but 0.900 is regular
        0.700000 is discount    0.65 is the string but 0.700 is regular

    What is going on?


  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never existed. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and probes for the cPanel login. I already block TRACE, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server takes other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless waste of resources? EDIT: I've never used the question itself to thank people for the answers, but I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs on the 404 page and emails me the user agent/IP while returning a standard 404 header; and a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all the answers if I could.
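    A minimal sketch of the "listen and log" 404 script described in the edit, assuming a Python/Flask front end; the suspect-path list, the log path, and emailing from a separate log watcher are assumptions, not the poster's actual setup:

        # Hedged sketch: log suspicious 404 hits (IP + user agent) while
        # still returning a plain 404 to the client. Paths are illustrative.
        from flask import Flask, request

        app = Flask(__name__)

        SUSPECT_PATHS = ("viewtopic.php", "wp-admin", "wp-login.php", "cpanel")

        @app.errorhandler(404)
        def not_found(error):
            path = request.path.lower()
            if any(p in path for p in SUSPECT_PATHS):
                entry = "%s %s %s" % (request.remote_addr, request.path,
                                      request.headers.get("User-Agent", "-"))
                with open("/var/log/suspect404.log", "a") as log:
                    log.write(entry + "\n")
            return "Not Found", 404  # always answer with a standard 404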


  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB, and 1000GB: five storage devices in one system. My application will store files of any kind on the storage system. Question: how can I build distributed storage with data redundancy and failover, storing documents, videos, and any other type of file, while ensuring that should any one storage device fail, there would be another copy of its files on another device? The concern is that 50GB of storage can only hold a certain maximum number of files compared to 70GB, 150GB, etc. Treating the five devices as one cloud-like store, is there any logical way to distribute or store the files through my application? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage system with multiple different storage sizes? I open this topic with the objective of discussing the best way to implement this idea, assuming simplicity, the issues of this implementation, performance measurements, and a discussion of the limitations.
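    One simple way to reason about the size question is to weight replica placement by each device's free space and require that copies land on distinct devices. A hedged sketch of that policy (two copies and weighting by free bytes are assumptions, not a full design):

        # Hedged sketch: pick distinct devices for a file's replicas,
        # weighted by remaining free space, so small disks fill more slowly.
        import random

        def place_replicas(free_by_device, file_size, copies=2):
            """free_by_device: dict name -> free bytes. Returns device names."""
            chosen = []
            pool = dict(free_by_device)
            for _ in range(copies):
                fits = {d: free for d, free in pool.items() if free >= file_size}
                if not fits:
                    raise RuntimeError("not enough space for another replica")
                names = list(fits)
                pick = random.choices(names, weights=[fits[n] for n in names])[0]
                chosen.append(pick)
                del pool[pick]  # replicas must land on distinct devices
            return chosen

        # e.g. place_replicas({"d50": 50, "d70": 70, "d150": 150,
        #                      "d250": 250, "d1000": 1000}, file_size=10)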


  • Spritegroups and colorkeys

    - by Fristi
    I have a problem using sprite groups in pygame. In my situation I have two sprite groups, one for humans, one for "infected". A human is represented by a blue circle:

        image = pygame.Surface((32,32))
        image.fill((255,255,255))
        pygame.draw.circle(image, (0,0,255), (16,16), 16)
        image = image.convert()
        image.set_colorkey((255,255,255))

    An infected one by a red circle (same code, different color). I update my sprite groups as follows:

        self.humans.clear(self.screen, self.bg)
        self.humans.update(time_passed)
        self.humans.draw(self.screen)
        self.infected.clear(self.screen, self.bg)
        self.infected.update(time_passed)
        self.infected.draw(self.screen)

    self.bg is defined as:

        self.bg = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT))
        self.bg.fill((255,255,255))
        self.bg.convert()

    This all works, except that when a red circle overlaps with a blue one, you can see the white corners of the bounding box around the actual circle. Within a single sprite group it works, using the set_colorkey function; this does not happen with overlapping blue circles or overlapping red circles. I tried adding a colorkey to self.bg, but that did not work. Same for adding a colorkey to self.screen.
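    For what it's worth, one common way to avoid per-group clear() artifacts altogether is to repaint the whole background each frame and draw the groups in a fixed order; a minimal sketch, reusing the group and surface names from the question (the method name is made up):

        # Hedged sketch: a render method for the game class in the question.
        # Skipping Group.clear() and repainting the full background each
        # frame means overlapping sprites from different groups cannot
        # leave stale rectangles behind.
        import pygame

        def render_frame(self, time_passed):
            self.screen.blit(self.bg, (0, 0))    # repaint everything
            self.humans.update(time_passed)
            self.infected.update(time_passed)
            self.humans.draw(self.screen)        # draw order decides who is on top
            self.infected.draw(self.screen)
            pygame.display.flip()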


  • Why don't windows of the same application behave as they should in Oneiric?

    - by Yuttadhammo
    Somewhere along the upgrade path, Unity has developed some strange logic behind window layering. First, before Oneiric, there was a way to see all the windows of an application - I think it was when you click on the icon in the launcher. Now, clicking on the icon often does nothing. Suppose I have two terminals open, one behind this Firefox window, and one in front of it. Clicking on the launcher does nothing - the only way to find the second terminal, afaics, is to move the Firefox window or use the task switcher (which has a whole slew of problems of its own). Secondly, once I have both terminals on top, then I decide to close one of them, suddenly they both disappear (the second one, for some reason, has gone into hiding behind the Firefox window). Third (though I can't pin it down now), sometimes when a window is on top, focus is still on a window in back; I click on the top x to close the window in front, only to find I've closed an important window in the back. I can't really believe these are bugs, since they seem too obvious to not have been fixed by now. My question is, am I missing something? Some compiz option I can set to make it act like it used to? Or is this really how Unity is supposed to act?


  • Mirroring of Apps across servers

    - by user1038814
    We wish to host multiple apps across multiple servers. What we are looking for (ideally) is an existing solution that will work. For example, for failover we'd normally follow a route like this:

    - The app is installed on one server along with the MySQL database.
    - The app is also installed on a second server.
    - Rsync is used to mirror the files over to the second server and ensure consistency.
    - MySQL is installed in a master-slave setup.
    - We use a service such as DNS Made Easy which has DNS failover. If one server goes down, it automatically routes traffic to the backup server.

    We have done the above a few times and generally it's fine. The issue I have here is that the above is for one app. What I would like to look at is how we can manage this for multiple apps, and whether there is a layer (such as VMware) that has complete mirroring built in at the OS level. For example, how do web hosts currently do it when they ensure that more than one machine is running a bunch of hosted websites? If you were running hosting and you had 200 clients on a server, you would want the same clients across two or more servers with everything mirrored. Any advice would be much appreciated.


  • Do there exist programming languages where a variable can truly know its own name?

    - by Job
    In PHP and Python one can iterate over the local variables and, if there is only one choice where the value matches, you could say that you know what the variable's name is, but this does not always work. Machine code does not have variable names. C compiles to assembly and does not have any native reflection capabilities, so it would not know its name. (Edit: per Anton's answer, the pre-processor can know the variable's name.) Do there exist programming languages where a variable would know its name? It gets tricky if you do something like b = a, where b does not become a copy of a but a reference to the same place. EDIT: Why in the world would you want this? I can think of one example: error checking that can survive automatic refactoring. Consider this C# snippet:

        private void CheckEnumStr(string paramName, string paramValue)
        {
            if (paramName != "pony" && paramName != "horse")
            {
                string exceptionMessage = String.Format(
                    "Unexpected value '{0}' of the parameter named '{1}'.",
                    paramValue, paramName);
                throw new ArgumentException(exceptionMessage);
            }
        }
        ...
        CheckEnumStr("a", a); // Var 'a' does not know its name - this will not survive naive auto-refactoring

    There are other libraries provided by Microsoft and others that allow checking for errors (sorry, the names have escaped me). I have seen one library which, with the help of closures/lambdas, can accomplish error checking that survives refactoring, but it does not feel idiomatic. This would be one reason why I might want a language where a variable knows its name.
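    To make the Python case in the first sentence concrete, here is a hedged sketch of the locals()-scan the question alludes to, including the aliasing problem it raises with b = a:

        # Hedged sketch: recover a variable's name by scanning the caller's
        # locals. It only "knows" the name while exactly one local is bound
        # to the object; aliasing makes the answer ambiguous.
        import inspect

        def names_of(value):
            caller_locals = inspect.currentframe().f_back.f_locals
            return [name for name, val in caller_locals.items() if val is value]

        def demo():
            a = [1, 2, 3]
            print(names_of(a))   # ['a']
            b = a                # alias: same object, two names
            print(names_of(a))   # ['a', 'b']: no longer unique

        demo()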


  • Dual boot: Windows 7 partition deleted after Kubuntu 14.04 install... Weird!

    - by user292152
    I've bought two new SSDs in order to install Kubuntu on one and Windows 7 on the other. Before, I had Linux Mint and Win7 together on just one SSD. So first I installed Win7, as recommended, and then used the guided installer to install Kubuntu. I selected the second SSD and chose the option "use entire disk and install", but to my surprise, after rebooting and selecting the Win7 boot loader from GRUB2, I got a prompt that my Windows installation is damaged and I need to run the repair option from the installation disk. So I booted into Kubuntu again, fired up kparted, and saw that my Windows partition had indeed been deleted, except for the recovery partition. I don't understand what happened. I am not new to this topic, and this was not my first time installing Ubuntu alongside Windows; I have never had this problem before. What can I do to make sure this won't happen again, so I won't waste another two hours of my life? Thanks a lot!


  • Attempting to install Ubuntu 11.10

    - by Orin
    I installed version 9 some time ago and have since forgotten the process for partitioning, or the layout is different. I have 5 partitions, but only Windows XP is installed on the PC in question; it occupies one of those 5 partitions, which is NTFS, 34444 MB, on a 40 GB hard drive. My first question is: is there a way to get a screenshot of the partitioner while I am running the demo session straight from disc? These 5 partitions are fragmenting the other roughly 4 GB needed to install. I get an error message which says to go back and make sure one partition has at least 2.5 GB or so, but I have no idea what I am supposed to set the remaining 4 partitions to in order to proceed. I have read up on install guides and understand that one partition must be "/" (root) and another swap, but I have not hit on the correct combination so far. A few screenshots would no doubt help you answer, as I'm baffled as to which specific details to give: each partition shows various settings on inspection, and I don't really feel like writing it all down manually and then posting the specs for each one.


  • Where do I read more about building an architecture like Google has? [on hold]

    - by user107148
    I want to develop a program that watches and traverses a rather big network for data. This data should then be available to search through with my program, maybe through a web interface or something. My intuitive thought at the moment is to build an architecture kind of like the one Google has: one "front-end" (the Google Search page), which is essentially a regular web application, and one "back-end" (which in Google's case traverses the web). Now for the hard part: if I decide to make such a system, how should communication be done between these parts? One idea I had is to use some kind of database that both the back-end and front-end can access, but then come the issues of concurrent writes and reads. Another issue with just using a database to communicate is that it makes it hard to "notify" the other part when something changes. Let's say that I want the "front-end" part to push changes to the UI when a change is noticed in the back-end. Then the back-end would have to have some way of notifying the front-end of this.
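    As a toy illustration of that notification problem, the back-end can publish change events onto a queue that the front-end blocks on, instead of both sides polling a shared database; a hedged sketch with an in-process queue (a real deployment would put a broker or pub/sub service between the two programs):

        # Hedged sketch: the crawler publishes change events; the front-end
        # blocks on the queue and is woken only when something changed.
        import queue
        import threading
        import time

        events = queue.Queue()

        def crawler():
            for n in range(3):
                time.sleep(0.1)   # pretend to traverse the network
                events.put({"doc_id": n, "change": "indexed"})

        def frontend():
            for _ in range(3):
                event = events.get()
                print("push to UI:", event)

        threading.Thread(target=crawler).start()
        frontend()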


  • Java single Array best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial https://www.youtube.com/watch?v=HwUnMy_pR6A and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do it. A multi-dimensional array does cost one pointer more, but arrays have O(1) access for each index, and calculating the index in a single array seems to take one addition and one multiplication per pixel. And if multi-dimensional arrays really are bad, couldn't you use something with hashing to avoid those addition and multiplication operations? EDIT: here is his code...

        public class Screen {
            private int width, height;
            public int[] pixels;

            public Screen(int width, int height) {
                this.width = width;
                this.height = height;
                // creating array the size of one index/int for every pixel
                // single array has better performance than multi-array
                pixels = new int[width * height];
            }

            public void render() {
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[x + y * width] = 0xff00ff;
                    }
                }
            }
        }
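    The index arithmetic in question is just the row-major mapping between (x, y) and a flat index; a quick hedged check of the forward and inverse mapping, in Python for brevity:

        # Hedged sketch: row-major mapping for a width*height pixel buffer.
        # One multiply and one add per lookup, and it round-trips exactly.
        width, height = 320, 240

        def to_index(x, y):
            return x + y * width

        def to_xy(i):
            return i % width, i // width

        assert to_xy(to_index(17, 42)) == (17, 42)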


  • How to keep track of user images when using a CDN? [closed]

    - by Programmer
    We are considering moving our user profile images from the local server to the Rackspace CDN (Cloud Files). However, how do you keep track of where each user's profile image is located on the CDN? Wouldn't you have to store the CDN URL for each user image in the local database and query it every time you display a user image? Isn't that slower than accessing a user image directly on the local server, which requires no such DB query, since you already know where it is stored based on the user's user ID? What if a user has an album of pics? How would you keep track of all the images that belong to just that one user? What about the order of those pics? In the case of the Rackspace CDN, we're looking at using a Container for each individual user to help keep things more logically organized, but we don't know the best way to track all of it, since the CDN provides a seemingly random URL for each image. To make matters worse, you can't even delete a non-empty Container belonging to a user when they delete their account; you actually have to delete each object inside the Container one by one before deleting the Container itself. It doesn't end there: you can't have nested Containers or "sub-folders", and you can't rename a file (you must copy it with a new name and delete the old one manually). It just sounds so much more complicated than we thought it would be, and it certainly does not feel "intuitive" compared to local storage, so we don't know what to do. Please help.
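    One way to keep the bookkeeping manageable, whatever CDN sits underneath, is to treat the CDN object name as opaque and keep an ordered mapping locally; a hedged sketch of such a table and key scheme (the schema and naming convention are made up for illustration):

        # Hedged sketch: a local table maps (user, album, position) to an
        # opaque CDN object name, so ordering and ownership live in your DB.
        import sqlite3
        import uuid

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE user_images (
                          user_id    INTEGER NOT NULL,
                          album      TEXT    NOT NULL,
                          position   INTEGER NOT NULL,
                          cdn_object TEXT    NOT NULL,
                          PRIMARY KEY (user_id, album, position))""")

        def add_image(user_id, album, position):
            cdn_object = "user-%d/%s/%s" % (user_id, album, uuid.uuid4().hex)
            db.execute("INSERT INTO user_images VALUES (?, ?, ?, ?)",
                       (user_id, album, position, cdn_object))
            return cdn_object  # upload the file under this name

        def album_in_order(user_id, album):
            rows = db.execute("""SELECT cdn_object FROM user_images
                                 WHERE user_id = ? AND album = ?
                                 ORDER BY position""", (user_id, album))
            return [r[0] for r in rows]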


  • Characteristics, what's the inverse of (x*(x+1))/2? [closed]

    - by Valmond
    In my game you can spend points to upgrade characteristics. Each characteristic has a formula like:

    A) out = in: for one point spent, one point gained (you spend 1 point on Force, so your Force goes from 5 to 6).
    B) out = last level (starting at 1): the first point spent earns you 1 point, the next point spent earns you an additional 2, and so on (+3, +4, +5...).
    C) The inverse of B): you need to spend 1 point to earn one, then you need to spend 2 to earn another one, and so on.

    I have already found the formula for calculating the actual level of B) when points spent = x: charac = (x*(x+1))/2. But I'd like to know the "reverse" version of B) (usable for C), i.e., if I have spent x points, how many have I earned, given that 1 spent gives 1, 1+2=3 gives 2, 1+2+3=6 gives 3, and so on. I know I can just calculate the numbers, but I'd like to have the formula because it's neater, and so that I can stick it in an Excel sheet, for example. Thanks! P.S. I think I have nailed it down to something like charac = sqrt(x*m + k), but then I'm stuck guessing numbers for k and m, and I feel I might be wrong anyway, as I get close but never hit the spot.
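    For the record, the inverse asked for here has a closed form; solving s = x(x+1)/2 for x with the quadratic formula gives:

        % s = x(x+1)/2  =>  x^2 + x - 2s = 0  =>  take the positive root
        \[
          s = \frac{x(x+1)}{2}
          \quad\Longrightarrow\quad
          x = \frac{\sqrt{8s + 1} - 1}{2}.
        \]
        % For case C), after spending x points the number earned is the
        % largest whole n with n(n+1)/2 <= x, i.e.
        \[
          n = \left\lfloor \frac{\sqrt{8x + 1} - 1}{2} \right\rfloor .
        \]
        % Quick check: x = 1 -> 1, x = 3 -> 2, x = 6 -> 3, matching
        % 1, 1+2 = 3, 1+2+3 = 6 from the question.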


  • Multiple "pages" in GWT with human friendly URLs

    - by Andreas Borglin
    Hi. I'm playing with a GWT/GAE project which will have three different "pages", although they are not really pages in the GWT sense. The top views (one for each page) will have completely different layouts, but some of the widgets will be shared. One of the pages is the main page, which is loaded by the default URL (http://www.site.com), but the other two need additional URL information to differentiate the page type. They also need a name parameter (like http://www.site.com/project/project-name). There are at least two solutions to this that I'm aware of:

    1. Use the GWT history mechanism and let the page type and parameters (such as the project name) be part of the history token.
    2. Use servlets with url-mapping patterns (like /project/*).

    The first choice might seem obvious at first, but it has several drawbacks. First, a user should be able to easily remember and type a URL leading directly to a project, and it is hard to produce a human-friendly URL with history tokens. Second, I'm using gwt-presenter, and this approach would mean that we need to support subplaces in one token, which I'd rather avoid. Third, a user will typically stay on one page, so it makes more sense for the page information to be part of the "static" URL. Using servlets solves all these problems, but also creates other ones. So my first question is: what is the best solution here?

    If I go for the servlet solution, new questions pop up. It might make sense to split the GWT app into three separate modules, each with an entry point. Each servlet that is mapped to a certain page would then simply forward the request to the GWT module that handles that page. Since a user typically stays on one page, the browser only needs to load the JS for that page. Based on what I've read, this solution is not really recommended. I could also stick with one module, but then GWT needs to find out which page it should display. It could either query the server or parse the URL itself. If I stick with one GWT module, I need to keep the page information stored on the server side. Naturally I thought about sessions, but I'm not sure it's a good idea to mix page information with user data: a session usually lives between user login and logout, but in this case it would need different behavior. Would it be bad practice to handle this via sessions? The one-module-plus-servlets solution also leads to another problem: if a user goes from a project page to the main page, how will GWT know that this has happened? The app will not be reloaded, so it will be treated as a simple state change, and it seems rather inefficient to have to check page info for every state change. Anyone care to guide me out of the foggy darkness that surrounds me? :-)
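    Whichever side ends up doing it, extracting the page type and parameter from a path like /project/project-name is only a few lines; a hedged, language-agnostic sketch of that parsing step (shown in Python; the route names come from the question):

        # Hedged sketch: map a human-friendly path to (page, parameter).
        def route(path):
            parts = [p for p in path.split("/") if p]
            if not parts:
                return ("main", None)                # http://www.site.com/
            if parts[0] == "project" and len(parts) == 2:
                return ("project", parts[1])         # /project/project-name
            raise ValueError("unknown page: %r" % path)

        assert route("/") == ("main", None)
        assert route("/project/project-name") == ("project", "project-name")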


  • jquerymobile - include .js and .html

    - by Girija
    Hi, I have described my problem in the following lines; kindly clarify it for me. In my application I am using more than one HTML page to display content, and each page has its own .js file. When I load an HTML page, its .js file is also included. In each .js file I use $('div').live('pageshow', function(){}), and I call the next HTML file from the .js (using $.mobile.changePage("htmlpage")). My problem: consider two HTML files. second.html is called from within one.js. When I show second.html, one.js runs again, so I get the alert "one.js" and then "second.js". Please help me, and point out my mistake. Thanks in advance. I have attached the code.

    one.html:

        <!DOCTYPE html>
        <html>
        <head>
        <title>Page Title</title>
        <link rel="stylesheet" href="jquery.mobile-1.0a2.min.css" />
        <script src="jquery-1.4.3.min.js"></script>
        <script src="jquery.mobile-1.0a2.min.js"></script>
        <script src="Scripts/one.js"></script>
        </head>
        <body>
        <div data-role="page">
        </div>
        </body>
        </html>

    Second.html:

        <!DOCTYPE html>
        <html>
        <head>
        <title>Sample</title>
        <link rel="stylesheet" href="../jquery.mobile-1.0a2.min.css" />
        <script src="../jquery-1.4.3.min.js"></script>
        <script src="../jquery.mobile-1.0a2.min.js"></script>
        <script type="text/javascript" src="Scripts/second.js"></script>
        </head>
        <body>
        <div data-role="page">
        <div data-role="button" id="link">Second</div>
        </div><!-- /page -->
        </body>
        </html>

    one.js:

        $('div').live('pageshow', function() {
            alert("one.js");
            // AJAX call; on success, go to second.html
            $.mobile.changePage("second.html");
        });

    second.js:

        $('div').live('pageshow', function() {
            alert('second.js');
            // AJAX call; on success, go to third.html
            $.mobile.changePage("third.html");
        });

    Note: when I show forth.html, one.js, second.js, third.js and fourth.js are all run again, but I need fourth.js alone. I tried using $(document).ready(function(){}), but then the .js did not run at all. :( :( :(


  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
    Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I didn't get a full answer to yet. The other related questions are:

    http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity
    http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues

    In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer for doing the same thing on a *nix, *BSD, or Solaris system. So, I need to have my own flow control system. Fine. So I send and send and send, a lot. How do I know when to start queueing the sends, as the receiver side is limited to X amount? Let's take an example (the closest thing to my question): the FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send a 1GB file to the other (the 10Mb-linked one). It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb-linked one) know when to hold back the sending, so the slower one wouldn't be flooded? Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" require me to do this myself? Again, I have a loop with many sends to a remote server, and at some point within that loop I'll have to determine whether I should queue that send or pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that a send was done, I'll issue other pending sends; that's clear. Another design question related to this: since I am to use custom buffers with a send queue, and these buffers are freed for reuse (thus not using the "delete" keyword) when a "send-done" notification arrives, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not have each thread keep its own buffer pool, so that accessing it, at least when getting the buffers required for a send operation, requires no mutex, because the pool belongs to that thread only? The buffer pool is located at the thread-local storage (TLS) level. No shared pool implies no lock needed, which implies faster operations, BUT it also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another one that is sending right now and needs 1000 buffers will have to allocate its own. This is a long question and I hope none got hurt (: Thank you all!
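    Two notes that may help frame this. First, TCP does have built-in flow control: the receiver's advertised window throttles the sender, which the application observes as its send buffers filling up (blocking sends, or send completions arriving more slowly). Second, independent of IOCP specifics, the "to-be-sent queue" can be modeled as a cap on in-flight sends; a hedged sketch of that backpressure pattern (the callback names stand in for the IOCP completion path):

        # Hedged sketch: cap the number of in-flight sends. enqueue() blocks
        # (backpressure) once the cap is reached; on_send_done() stands in
        # for the IOCP "send completed" notification and frees a slot.
        import threading

        class SendQueue:
            def __init__(self, max_outstanding=64):
                self._slots = threading.Semaphore(max_outstanding)

            def enqueue(self, buf, transport_send):
                self._slots.acquire()    # wait if too many sends are in flight
                transport_send(buf)      # hand the buffer to the transport layer

            def on_send_done(self):
                self._slots.release()    # a completed send frees one slot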


  • Windows DD-esque implementation in Qt 5

    - by user1777667
    I'm trying to get into Qt, and as a project I want to try to pull a binary image from a hard drive in Windows. This is what I have:

        QFile dsk("//./PhysicalDrive1");
        dsk.open(QIODevice::ReadOnly);
        QByteArray readBytes = dsk.read(512);
        dsk.close();

        QFile img("C:/out.bin");
        img.open(QIODevice::WriteOnly);
        img.write(readBytes);
        img.close();

    When I run it, it creates the file, but when I view it in a hex editor, it says:

        ëR.NTFS ..........ø..?.ÿ.€.......€.€.ÿç......UT..............ö.......ì§á.Íá.`....ú3ÀŽÐ¼.|ûhÀ...hf.ˈ...f.>..NTFSu.´A»ªUÍ.r..ûUªu.÷Á..u.éÝ..ƒì.h..´HŠ...‹ô..Í.ŸƒÄ.žX.rá;...uÛ£..Á.....Z3Û¹. +Èfÿ.......ŽÂÿ...èK.+Èwï¸.»Í.f#Àu-f.ûTCPAu$.ù..r..h.».hp..h..fSfSfU...h¸.fa..Í.3À¿(.¹Ø.üóªé_...f`..f¡..f.....fh....fP.Sh..h..´BŠ.....‹ôÍ.fY[ZfYfY..‚..fÿ.......ŽÂÿ...u¼..faàø.è.. û.è..ôëý´.‹ð¬<.t.´.»..Í.ëòÃ..A disk read error occurred...BOOTMGR is missing...BOOTMGR is compressed...Press Ctrl+Alt+Del to restart...Œ©¾Ö..Uª

    Is there a better way of doing this? I tried running it as admin, but still no dice. Any help would be much appreciated.

    Update: I changed the code a bit. Now if I specify a dd image as the input, it writes the image perfectly, but when I try to use a disk as the input, it only writes 0x00:

        QTextStream(stdout) << "Start\n";
        QFile dsk("//./d:");
        //QFile dsk("//./PhysicalDrive1");
        //QFile dsk("//?/Device/Harddisk1/Partition0");
        //QFile dsk("//./Volume{e988ffc3-3512-11e3-99d8-806e6f6e6963}");
        //QFile dsk("//./Volume{04bbc7e2-a450-11e3-a9d9-902b34d5484f}");
        //QFile dsk("C:/out_2.bin");
        if (!dsk.open(QIODevice::ReadOnly))
            qDebug() << "Failed to open:" << dsk.errorString();
        qDebug() << "Size:" << dsk.size() << "\n";

        // Blank out image file
        QFile img(dst);
        img.open(QIODevice::WriteOnly);
        img.write(0);
        img.close();

        // Read and write operations
        while (!dsk.atEnd()) {
            QByteArray readBytes = dsk.readLine();
            qDebug() << "Reading: " << readBytes.size() << "\n";
            QFile img(dst);
            if (!img.open(QIODevice::Append))
                qDebug() << "Failed to open:" << img.errorString();
            img.write(readBytes);
        }
        QTextStream(stdout) << "End\n";

    I guess the real problem I'm having is how to open a volume with QFile in Windows. I tried a few variants, but to no avail.
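    As a sanity check on the device path and privileges, the same first-sector read can be reproduced outside Qt; a hedged sketch in Python (reads on raw Windows devices must be in multiples of the sector size, and the process must run elevated):

        # Hedged sketch: dump the first sector of a raw Windows device.
        SECTOR = 512

        with open(r"\\.\PhysicalDrive1", "rb") as dsk:   # or r"\\.\D:" for a volume
            boot_sector = dsk.read(SECTOR)               # sector-aligned read

        with open(r"C:\out.bin", "wb") as img:
            img.write(boot_sector)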


  • XSD: how to use 'unique' & 'key'/'keyref' with element values?

    - by Koohoolinn
    I am trying to use 'unique' and 'key'/'keyref' with element values, but I just can't get it to work. If I do it with attribute values, it works like a charm.

    Test.xml:

        <test:config xmlns:test="http://www.example.org/Test"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://www.example.org/Test Test.xsd ">
            <test:location id="id1" path="/path2">
                <test:roles>
                    <test:role>role1</test:role>
                    <test:role>role2</test:role>
                    <test:role>role2</test:role> <!-- DUPLICATE: FAIL VALIDATION -->
                </test:roles>
                <test:action name="action1">
                    <test:roles>
                        <test:role>role1</test:role>
                        <test:role>role1</test:role> <!-- DUPLICATE: FAIL VALIDATION -->
                        <test:role>role3</test:role> <!-- NOT DEFINED: FAIL VALIDATION -->
                    </test:roles>
                </test:action>
            </test:location>
        </test:config>

    I want to ensure that roles are only defined once and that the roles defined under the action element are only those defined at the upper level.

    Test.xsd:

        <xs:element name="config">
            <xs:complexType>
                <xs:sequence>
                    <xs:element ref="test:location" maxOccurs="unbounded" />
                </xs:sequence>
            </xs:complexType>
        </xs:element>

        <xs:element name="location" type="test:LocationType">
            <xs:key name="keyRole">
                <xs:selector xpath="test:roles" />
                <xs:field xpath="test:role" />
            </xs:key>
            <xs:keyref name="keyrefRole" refer="test:keyRole">
                <xs:selector xpath="test:action/test:roles" />
                <xs:field xpath="test:role" />
            </xs:keyref>
        </xs:element>

        <xs:complexType name="LocationType">
            <xs:sequence>
                <xs:element ref="test:roles" minOccurs="0" />
                <xs:element name="action" type="test:ActionType" minOccurs="0" maxOccurs="unbounded"/>
            </xs:sequence>
            <xs:attribute name="id" type="xs:string" use="required"/>
            <xs:attribute name="path" type="xs:string" use="required"/>
        </xs:complexType>

        <xs:element name="roles" type="test:RolesType">
            <xs:unique name="uniqueRole">
                <xs:selector xpath="." />
                <xs:field xpath="test:role" />
            </xs:unique>
        </xs:element>

        <xs:complexType name="RolesType">
            <xs:sequence>
                <xs:element name="role" type="xs:string" maxOccurs="unbounded"/>
            </xs:sequence>
        </xs:complexType>

        <xs:complexType name="ActionType">
            <xs:sequence>
                <xs:element ref="test:roles" />
            </xs:sequence>
            <xs:attribute name="name" type="xs:string" use="required" />
        </xs:complexType>

    The validation fails with these messages (one per reported problem):

        cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyrefRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 15
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyrefRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 16
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 9
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 9
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 15
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 16
        cvc-identity-constraint.4.1: Duplicate unique value [role1] declared for identity constraint "uniqueRole" of element "roles". Test.xml /filebrowser-ejb/src/test/resources line 9
        cvc-identity-constraint.4.1: Duplicate unique value [role1] declared for identity constraint "uniqueRole" of element "roles". Test.xml /filebrowser-ejb/src/test/resources line 15
        cvc-identity-constraint.4.2.2: Duplicate key value [role1] declared for identity constraint "keyRole" of element "location". Test.xml /filebrowser-ejb/src/test/resources line 9
        cvc-identity-constraint.4.3: Key 'keyrefRole' with value 'role3' not found for identity constraint of element 'location'. Test.xml /filebrowser-ejb/src/test/resources line 19

    If I comment out the lines that should fail, validation still fails, now with these messages:

        cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10
        cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10

    What am I doing wrong?

  • Using javascript to limit survey choices to three unique values

    - by leanne
    I'm required to use a limited survey application, and have to adapt the provided code for more advanced functionality. I need to create a weighted ranking question, so users can select their top three choices and the data will go into the survey application and be accessible in the survey reports. The application only supports two types of questions (text fill and multiple choice), but I can alter the code as long as it still sends the form data back to the survey application. The code is set up to show a drop-down menu of 0-3 for each option. Now I want to limit the user's choices so they can only select one "1", one "2" and one "3": three choices total. Ideally, if the user already had "2" selected for one option and they tried to select it for another option, the first "2" would be reset to "0" or blank. Is this possible to do with JavaScript? If so, does anyone know of a site that might show code like this, or provide similar enough examples that I could adapt it? Current code here:

        <html>
        <head><title>Survey</title></head>
        <!-- Changes
             - remove br to put dropdown next to text for each item. Switch text & dropdown order for each item.
             - add comments to separate each question
             - removed blue title font
             - add instructions
             Goals
             - limit choices to one 1, one 2 and one 3, three choices total. -->
        <link href="---" rel="stylesheet" type="text/css">
        <body bgcolor="#3c76a3">
        <!-- TRANSITIONAL DIALOG BOX -->
        <table border="0" align="center" cellpadding="0" cellspacing="0"
               style="background-attachment: scroll; background-color: #3c76a3; background-repeat: no-repeat; background-position: left top;"
               bgcolor="#3c76a3" topmargin="0" marginwidth="0" marginheight="0" width="100%" height="100%">
        <tr><td>
        <table border="0" align="center" cellpadding="0" cellspacing="0" id="survey">
        <tr><td><p>&nbsp;</p>
        <!-- HEADER END -->
        <!-- FORM START TAG --><form name="survey" action="---" method="POST">
        <FONT face="Verdana, Arial, Helvetica, sans-serif">
        <b>survey</b><hr>
        <!-- 1 -->
        <input type=hidden name="Buy R.J. a DeLorean_multiple_answers" value="one">
        <font size=2><select name="Buy R.J. a DeLorean" SIZE=1>
          <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3
        </select></font>
        <input type="hidden" name="Buy R.J. a DeLorean_help" value="">
        <b><font size=2>Buy R.J. a DeLorean</font></b>
        <hr size=1>
        <!-- 2 -->
        <input type=hidden name="Fill Lisa's office with marshmallows._multiple_answers" value="one">
        <font size=2><select name="Fill Lisa's office with marshmallows." SIZE=1>
          <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3
        </select></font>
        <input type="hidden" name="Fill Lisa's office with marshmallows._help" value="">
        <b><font size=2>Fill Lisa's office with marshmallows.</font></b>
        <hr size=1>
        <!-- 3 -->
        <input type=hidden name="Install a beer fridge in everyone's filing cabinets._multiple_answers" value="one">
        <font size=2><select name="Install a beer fridge in everyone's filing cabinets." SIZE=1>
          <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3
        </select></font>
        <input type="hidden" name="Install a beer fridge in everyone's filing cabinets._help" value="">
        <b><font size=2>Install a beer fridge in everyone's filing cabinets.</font></b>
        <hr size=1>
        <!-- 4 -->
        <input type=hidden name="Buy a company Cessna_multiple_answers" value="one">
        <font size=2><select name="Buy a company Cessna" SIZE=1>
          <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3
        </select></font>
        <input type="hidden" name="Buy a company Cessna_help" value="">
        <b><font size=2>Buy a company Cessna</font></b><br>
        <hr size=1>
        <!-- 5 -->
        <input type=hidden name="Replace Conf2's chairs with miniature ponies._multiple_answers" value="one">
        <font size=2><select name="Replace Conf2's chairs with miniature ponies." SIZE=1>
          <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3
        </select></font>
        <input type="hidden" name="Replace Conf2's chairs with miniature ponies._help" value="">
        <b><font size=2>Replace Conf2's chairs with miniature ponies.</font></b>
        <hr size=1>
        <input type="hidden" name="question_names" value="{Buy R.J. a DeLorean} {Fill Lisa's office with marshmallows.} {Install a beer fridge in everyone's filing cabinets.} {Buy a company Cessna} {Replace Conf2's chairs with miniature ponies.}">
        <p align="right"><input type="image" BORDER=0 title="Save Changes" alt="Save Changes" src="---" name="button_save_changes">
        <input type="hidden" name="showconfirm" value="T">
        <input type="hidden" name="showresults" value="F">
        <input type="hidden" name="preventdupesmemberid" value="T">
        <input type="hidden" name="preventdupesip" value="F">
        <input type="hidden" name="numberquestions" value="F">
        <input type="hidden" name="destinationurl" value="">
        <input type="hidden" name="original_survey_id" value="62">
        <!-- FORM END TAG --></form>
        <!-- FOOTER START -->
        </td></tr>
        </table>
        </td></tr>
        </table>
        <!-- END HEADER -->
        </body>
        </html>

  • The Complete List of iPad Tips, Tricks, and Tutorials

    - by Ross
    The Apple iPad is the latest new toy, and we've put together a comprehensive list of every tip, trick, and tutorial that we could find to help you get the most out of it. We're even giving one away to one lucky reader, so read on! Note: we'll be keeping this page updated as we find more great articles, so you should bookmark this page for future reference.

    - Want your own iPad? How-To Geek is giving one away! All you have to do to enter is become a fan of our Facebook page, and we'll pick a random fan to win the prize.
    - Disable the "clicking sound" on the iPad keyboard: does the clicking sound when you tap the iPad keyboard bother you? Thankfully it's easy to disable with a couple of taps.
    - Enable and add bookmarks to the Safari Bookmarks Bar on your iPad: by default, Safari doesn't display the Bookmarks Bar. This tip shows you how to change that.
    - Clear the cache, history and cookies in Safari for the iPad: you're probably used to clearing this kind of data right from within the browser. Not so with Safari on the iPad, but here's how you can.
    - Add more apps to your iPad dock: the iPad has four icons in its dock. Did you know it can hold 6?
    - Convert PDF files to ePub files to read on your iPad with iBooks: ePub is the format that iBooks are in, so for those of you with large eBook collections in PDF, here's how to convert them to read in iBooks.
    - Force your iPad to restart: has an app caused your iPad to freeze up, and you can't escape? This tip shows you how.
    - Export Keynote for iPad presentations to your Mac or PC: exporting Keynote presentations from your iPad isn't as straightforward as you might have expected. This tutorial shows you how.
    - Import presentations to Keynote on your iPad: having trouble getting your presentations onto your iPad? This guide walks you through it.
    - Import documents to Pages on your iPad: this guide shows you how to transfer documents (MS Word or Pages) from your Mac/PC to your iPad.
    - Insert photos in a Pages document using iPad and share it as a PDF: want to spice up that doc with a picture you just took? This tutorial shows you how, and how to export that document as a PDF.
    - Lock your iPad: if you have kids or co-workers/friends who think it's funny to mess with your iPad, lock it.
    - Remove the "Sent from my iPad" signature from outgoing email: this guide shows you how to remove the "Sent from my iPad" signature and replace it with your own (or none).
    - Sync multiple calendars to the iPad with Google Sync: this tutorial shows a workaround for syncing multiple calendars on your iPad using Google Sync.
    - Determine the MAC address of your iPad: if your network restricts connections by MAC address, this guide shows you how to find yours.
    - Take a screenshot of your iPad: this quick tip shows you how to do just that.
    - Delete apps from your iPod Touch, iPhone or iPad: anyone who had an iPod Touch or iPhone before they had an iPad won't need this tutorial, but if you're new to the experience, this one will help.
    - Determine the iPad ECID on Windows and Mac: iPadintosh shows us how to grab the iPad's ECID code, something you'll want to have come jailbreak time.
    - iPad apps, Twitter and social networking essentials: Engadget has you covered with reviews of the first slew of iPad-specific Twitter and other social networking apps.
    - What does your website look like on an iPad? iPad Peek is a web-based tool that lets you enter any URL and displays that page the same way Safari on the iPad does. Great for website owners who don't have access to an iPad.
    - Stream music and videos to your iPad: Gizmodo reviews the iPad app StreamToMe, which streams media from your Mac to your iPad across your local network. Their feelings in a nutshell: worth the $3, but not perfect.
    - Change links in Google Reader to point to the full HTML webpage: how to change links in Safari for iPad so that Google Reader points to a full HTML webpage.
    - Connect an iPad to your existing wireless keyboard: this video shows you how to connect your iPad to a wireless keyboard if you're having any problems, and from the sound of things, quite a few folks are (via TUAW).
    - Get started with the iPad: Mashable has a very entry-level guide that will help you set up your iPad for the first time.
    - Essential iPad apps: Download Squad gives mini-reviews of 8 iPad apps that you should install as soon as you get your iPad.
    - Videos, the official iPad guided tours: from none other than Apple! Great getting-started videos for all the included iPad apps.
    - The official iPad manual: when you buy an iPad, you don't get a manual, but that's not to say there isn't one. Apple provides a 150-page guide for your iPad in PDF format.
    - Print from your iPad: sure, it's actually just an app (PrintCentral, $9.99 USD), but as of right now, it's the only way.
    - Make your own iPad wallpaper: a perfectly detailed tutorial; the author also provides a really nice sample wallpaper, published under the Attribution-Noncommercial 2.0 Generic license.

    Got any more tips? Share them in the comments, and we'll update the post with the links, or just the tip itself.


  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. Some integration tests I want to write need to control the types of instances used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that, a lot of the time, it will by default run in a different process. I am using StructureMap as my DI of choice, and one of the tools I use alongside RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to. This is where I started thinking about two things:

    - Running a WCF service in process
    - Being able to share mocked instances across processes

    A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects I use when I think about WCF services is AGATHA, and it is the one I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy; if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view:

        <system.serviceModel>
          <services>
            <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
              <endpoint address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
            </service>
          </services>
          <client>
            <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
          </client>
        </system.serviceModel>

    You can see here that I am referencing the Agatha object and contract, but also that my binding and address use something called named pipes. This is the sort of "magic" which makes it happen in the same process. Next, I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific code, and one of the pains I found was that you obviously need to give your proxy the known types the serializer should be aware of, so we need to add to the known types of the proxy programmatically. I came across the following blog post which showed me how easy that is: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx.

    First Pass

    So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy, which I made using the Agatha IWcfRequestProcessor contract:

        public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor
        {
            public InProcProxy() { }

            public InProcProxy(string configurationName) : base(configurationName) { }

            public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
            {
                return Channel.Process(requests);
            }

            public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
            {
                Channel.ProcessOneWayRequests(requests);
            }
        }

    So with the proxy in place, I could then use this after opening the service; here is the code I use inside the console app to make the request:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new InProcProxy())
            {
                foreach (var operation in proxy.Endpoint.Contract.Operations)
                    foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                        operation.KnownTypes.Add(t);

                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returns two products:

        public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
        {
            public override Agatha.Common.Response Handle(GetProductsRequest request)
            {
                return new GetProductsResponse
                {
                    Products = new List<ProductDto>
                    {
                        new ProductDto {},
                        new ProductDto {}
                    }
                };
            }
        }

    Second Pass

    After I did this, I started reading up some more on resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a class derived from ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, thanks to a nice class present in the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use it, I am now left with the following:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new RequestProcessorProxy())
            {
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    Cheers for now,
    Andy

    References: Agatha; WCF InProcess Without WCF; StructureMap.AutoMocking; Cross Process Mocking; Programming WCF Services by Juval Lowy


  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
    By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool. There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

    When using Parallel.For, the partitioning is always handled automatically. At first, partitioning here seems simple. A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor. Many hand-written partitioning schemes work in exactly this manner. This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element. However, this is rarely the case. Often, the length of time required to process an element grows as we progress through the collection, especially if we're doing numerical computations. In this case, the first PEs will finish early and sit idle waiting on the last chunks to finish. Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations. In this situation, the first chunks will be working far longer than the last chunks. In order to balance the workload, many implementations create many small chunks and reuse threads. This adds overhead, but does provide better load balancing, which in turn improves performance.

    The Task Parallel Library handles this more elaborately. Chunks are determined at runtime and start small. They grow slowly over time, getting larger and larger. This tends to lead to near-optimal load balancing, even in odd cases such as increasing or decreasing workloads. Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance and must be discovered at runtime. In addition, since we don't have direct access to each element, the scheduler must enumerate the collection to process it. Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out. By default, it uses a partitioning method similar to the one described above. We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming. When we run the sample, with four cores and the default Load Balancing partitioning scheme, we see this: the colored bands represent each processing core. You can see that, when we started (at the top), we begin with very small bands of color. As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes).

    Most of the time, this is fantastic behavior, and most likely will outperform any custom-written partitioning. However, if your routine is not scaling well, it may be due to a failure of the default partitioning to handle your specific case. With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance. The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
    By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> from which each partition draws its elements.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately.

    The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand-written partitioning strategy.
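    To make the shape of such a subclass concrete, here is a minimal static Partitioner<T> sketch.  The class name and contiguous-slice strategy are illustrative assumptions - this is not the ChunkPartitioner from ParallelExtensionsExtras:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        // A minimal static Partitioner<T>: splits a list into contiguous
        // slices, one enumerator per requested partition.
        class StaticSlicePartitioner<T> : Partitioner<T>
        {
            private readonly IList<T> _source;

            public StaticSlicePartitioner(IList<T> source)
            {
                _source = source;
            }

            // Static partitioning only; no dynamic partitions offered.
            public override bool SupportsDynamicPartitions
            {
                get { return false; }
            }

            public override IList<IEnumerator<T>> GetPartitions(int partitionCount)
            {
                var partitions = new List<IEnumerator<T>>(partitionCount);
                int chunkSize = (_source.Count + partitionCount - 1) / partitionCount;

                for (int p = 0; p < partitionCount; p++)
                {
                    int start = Math.Min(p * chunkSize, _source.Count);
                    int end = Math.Min(start + chunkSize, _source.Count);
                    partitions.Add(Slice(start, end));
                }
                return partitions;
            }

            // Iterator yielding the elements of one contiguous slice.
            private IEnumerator<T> Slice(int start, int end)
            {
                for (int i = start; i < end; i++)
                    yield return _source[i];
            }
        }

    It could be consumed as Parallel.ForEach(new StaticSlicePartitioner<int>(numbers), item => { /* work */ }).  Note that this static scheme deliberately reintroduces the load-balancing weaknesses discussed earlier, which is precisely why the TPL's growing-chunk default should be your starting point.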

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I’m going to discuss some key architectural changes and concepts that have taken place in TFS 2010 when compared to TFS 2008. In TFS 2010, you first perform the installation and then configure the installation features from those available. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms.

    1) Installation Features available in TFS 2010:

    a) Basic: It is the most compact TFS installation possible. It will install and configure Source Control, Work Item Tracking and Build Services only. (SharePoint and Reporting integration will not be possible.)

    b) Standard Single Server: This is suitable for a single-server deployment of TFS. It will install and configure Windows SharePoint Services for you and will use the default instance of SQL Server.

    c) Advanced: It is suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.

    d) Application Tier Only: Use this if you want to configure high availability for Team Foundation Server in a network load balanced (NLB) environment, move Team Foundation Server from one server to another, or restore TFS.

    e) Upgrade: Use this if you want to upgrade from a prior version of TFS.

    Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas previous versions of TFS (2005 and 2008) could not be installed on a client OS.

    2) Team Project Collections:

    Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of team projects. Each project may or may not be independent of the other projects, every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do:

    a) If something has gone wrong in one team project and you want to restore it to an earlier state where it was working properly, you must restore the Team Foundation Server database from a backup taken as per your maintenance plans, and because of this the other team projects may lose work that was not backed up.

    b) Your company merged with another company and now you have two TFS servers: the one you have been working on, and the one the other company was working on. After the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. Though you can manually recreate the team projects you want to integrate on one server (in Source Control), you will lose the history of changesets, work items and more, which is very important.

    There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve these kinds of issues, which mainly related to TFS maintenance, integration, migration and security, Microsoft has come up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with the SharePoint architecture, it will help you understand the TFS 2010 architecture easily.

    Connect to TFS dialog box in TFS 2010: In the dialog box there are two team project collections; each collection can contain any number of team projects. On the right side it shows the two team projects in the team project collection (Default Collection) which I have chosen. Note: You can connect to only one team project collection at a time using an instance of TFS Team Explorer.

    How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management, ...).

    New TFS 2010 database architecture:

    TFS_Config: It’s the root database and it contains centralized TFS configuration data, including the list of all team project collections that exist on the TFS server.

    TFS_Warehouse: The data warehouse contains all the reporting data served by this server (farm).

    TFS_*: Each of these contains the data of one team project collection. Such a database contains all the operational data of the team project collection regardless of subsystem. In addition to this, you will have databases for SharePoint and Report Server.

    3) TFS Farms: As TFS 2010 is more flexible to configure, with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm if you are going for a multi-server installation of TFS.

    NLB support for TFS application tiers: With TFS 2010 you can configure multiple TFS application tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work, with hardly any indication of a problem to end users.

    SQL data tiers: With 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database. This feature can also be used to load balance databases across SQL Servers.

    These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation. You can grow it incrementally by adding ATs and SQL Servers as needed.

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
    jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are transparent to existing application usage of jQuery. After spending some time last week going through a few of my projects with a specific eye for jQuery failures, I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync, but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions that has caused me quite a bit of update trouble is the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live(), and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement.

    jQuery.live()

    jQuery.live() was introduced a long time ago to simplify handling events on matched elements that currently exist in the document and on those that are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using metadata attached to a parent-level element to check whether the original event target element matches the selected elements (for more info see Elijah Manor's comment below).

    An Example

    Assume a list of items like the following in HTML, and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list, more items are loaded dynamically and added to the list.

    <div id="PostItemContainer" class="scrollbox">
        <div class="postitem" data-id="4z6qhomm">
            <div class="post-icon"></div>
            <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div>
            <div class="postitemprice rightalign">$ 3,500 O.B.O.</div>
            <div class="smalltext leftalign">Jun. 07 @ 1:06am</div>
            <div class="post-byline">- Vehicles - Automobiles</div>
        </div>
        <div class="postitem" data-id="2jtvuu17">
            <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div>
            <div class="postitemprice rightalign">$950</div>
            <div class="smalltext leftalign">Jun. 07 @ 12:29am</div>
            <div class="post-byline">- Vehicles - Automobiles</div>
        </div>
        …
    </div>

    With the jQuery.live() function you could easily select elements and hook up a click handler like this:

    $(".postitem").live("click", function() {...});

    Simple and perfectly readable. The behavior of the .live() handler was generally the same as that of the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the shortcut methods.

    Re-writing with jQuery.on()

    With .live() removed in 1.9 and later, we have to re-write the .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high-level event functions like .click(), .change() etc. that jQuery exposes. Using jQuery.on(), however, is not a one-to-one replacement for the .live() function.
    While .on() can handle events directly using the same syntax as .live() did, you'll find that if you simply switch out .live() for .on(), events on not-yet-existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect, but you have to change the syntax to explicitly handle the event on the container and then provide a filter selector to specify which child elements you are actually interested in handling the event for. It sounds more complicated than it is, and it's easier to see with an example. For the list above, hooking .postitem clicks using jQuery.on() looks like this:

    $("#PostItemContainer").on("click", ".postitem", function() {...});

    You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do, including document, but I tend to use the container closest to the elements I actually want to handle the events on, to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live(), and as new .postitem elements are added, the click events are always available. Sweet.

    Here's the full event signature for the .on() function:

    .on( events [, selector ] [, data ], handler(eventObject) )

    Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want life-like - uh, .live()-like - behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit about what your parent container for trapping events is.

    .on() is good Practice even for ordinary static Element Lists

    As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and try to match it to the originating element. In the early days of jQuery I manually built handlers that did this, drilling from the event object into the originalTarget to determine whether it was a matching element. With later versions of jQuery, the various event functions essentially provide this functionality out of the box with functions like .on() and .delegate().

    All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic.
    And maybe along the way I've helped one or two of you as well to clean up your .live() code…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in jQuery

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 proje

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008.

    The Problem

    Error Compiling transformation: Metadata file 'dotless.Core' could not be found

    In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS template, this was a showstopper for making it work in Visual Studio 2010.

    On a side note: The T4CSS template is a sweet little wrapper that lets you use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCSS yet, go check it out now! In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.

    Back to our regularly scheduled program…

    Anyhow, this post isn't about DotLessCss; it's about T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 template engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.”

    In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file) you would simply right-click on your project, choose Add Reference, and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references:

    <#@ assembly name="dotless.Core.dll" #>

    This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what changed in VS2010. The T4 engine is now basically sandboxed to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

    Who broke the build? Oh, Microsoft did!

    In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change. So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 templates:

    1. GAC your assemblies and use a namespace reference or fully qualified type name.
    2. Use a hard-coded fully qualified UNC path.
    3. Copy the assembly to the Visual Studio "Public Assemblies Folder" and use a namespace reference or fully qualified type name.
    4. Use or define a Windows environment variable to build a fully qualified UNC path.
    5. Use a Visual Studio macro to build a fully qualified UNC path.

    Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, you will have to adopt one of those approaches.

    Yakkety Yak, use the GAC!

    Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:

    <#@ assembly name="dotless.Core" #>

    Hard Coding ain't that hard!
    The other option, using hard-coded paths (Option #2), is pretty impractical in most situations, since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

    <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

    Let's go Public!

    Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS, and is my preferred solution. However, you will need either an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at:

    C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

    Once you have copied your assembly there, you use the type name or namespace syntax again:

    <#@ assembly name="dotless.Core" #>

    Save the Environment!

    Option #4, using a Windows environment variable, is interesting for enterprise use, where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

    <#@ assembly name="%mypath%\dotless.Core.dll" #>

    “mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use.

    OMG Macros!

    Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:

    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

    In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating pre/post-build event scripts, you can use that dialog to look at all of the different VS macros available.

    This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I’m not sure if T4 templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.

    Conclusion

    As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new-found power.

    Hopefully this all made sense and was helpful to you. If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
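    For reference, pulling the macro approach together, a complete template header might look like this minimal sketch (the /lib folder layout and the dotless.Core namespace import are assumptions for illustration; only the assembly directive itself is taken from the discussion above):

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
        <#@ import namespace="dotless.Core" #>
        <#@ output extension=".css" #>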
Happy T4 templating, and “May the fourth be with you!”

    Read the article

< Previous Page | 347 348 349 350 351 352 353 354 355 356 357 358  | Next Page >