Search Results

Search found 21268 results on 851 pages for 'null route'.


  • DataContractSerializer truncated string when used with MemoryStream, but works with StringWriter

    - by Michael Freidgeim
    We've used the following DataContractSerializeToXml method for a long time, but recently noticed that it doesn't return the full XML for a large object: the result is truncated to a length that is a multiple of 1024, and the remainder is not included. internal static string DataContractSerializeToXml<T>(T obj) { string strXml = ""; Type type = obj.GetType(); //typeof(T) DataContractSerializer serializer = new DataContractSerializer(type); System.IO.MemoryStream aMemStr = new System.IO.MemoryStream(); System.Xml.XmlTextWriter writer = new System.Xml.XmlTextWriter(aMemStr, null); serializer.WriteObject(writer, obj); strXml = System.Text.Encoding.UTF8.GetString(aMemStr.ToArray()); return strXml; } I tried to debug and searched Google for similar problems, but didn't find an explanation of the error. The closest match, http://forums.codeguru.com/showthread.php?309479-MemoryStream-allocates-size-multiple-of-1024-(, talks about an incorrect length, but not about a truncated string. Fortunately, replacing the MemoryStream with a StringWriter, as described in http://billrob.com/archive/2010/02/09/datacontractserializer-converting-objects-to-xml-string.aspx, fixed the issue: var serializer = new DataContractSerializer(tempData.GetType()); using (var backing = new System.IO.StringWriter()) using (var writer = new System.Xml.XmlTextWriter(backing)) { serializer.WriteObject(writer, tempData); data.XmlData = backing.ToString(); }
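    For reference, the truncation is consistent with XmlTextWriter buffering its output: the writer is never flushed before the MemoryStream is read, so only whole buffer-sized chunks have reached the stream. A minimal, hedged sketch of a MemoryStream-based version that flushes first (an alternative to the StringWriter fix above, not the author's code):

        internal static string DataContractSerializeToXml<T>(T obj)
        {
            var serializer = new DataContractSerializer(obj.GetType());
            using (var stream = new System.IO.MemoryStream())
            using (var writer = new System.Xml.XmlTextWriter(stream, System.Text.Encoding.UTF8))
            {
                serializer.WriteObject(writer, obj);
                writer.Flush(); // push the writer's buffered XML into the stream before reading it
                return System.Text.Encoding.UTF8.GetString(stream.ToArray());
            }
        }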

    Read the article

  • Ubuntu Server 11.04 recognizes only 1 core instead of 4

    - by Kreker
    I searched other questions and googled a lot, but I can't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450. It only recognizes 1 of the 4 cores. I tried modifying the GRUB config, but it didn't work. In the machine's BIOS I didn't find anything useful. CPU root@darwin:~# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz stepping : 10 cpu MHz : 2992.180 cache size : 6144 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority bogomips : 5984.36 clflush size : 64 cache_alignment : 64 address sizes : 38 bits physical, 48 bits virtual power management: GRUB root@darwin:~# cat /etc/default/grub # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=2 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="" GRUB_CMDLINE_LINUX="noapic nolapic" #was with acpi=off # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" Complete dmesg: too long, posted on pastebin http://pastebin.com/bsKPBhzu
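    The kernel options in GRUB_CMDLINE_LINUX are the prime suspect here: booting with noapic/nolapic disables the APIC, and without the local APIC the kernel generally cannot bring up the secondary cores, which matches the single processor shown above. A hedged sketch of the change, assuming the machine doesn't actually need those workarounds:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX=""          # was "noapic nolapic"

        # then regenerate the config and reboot:
        #   sudo update-grub
        #   sudo reboot
        # verify afterwards:
        #   grep -c ^processor /proc/cpuinfo    # should now report 4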

    Read the article

  • Is it considered poor programming to do this with xna components?

    - by Rob
    I created my own Menu System that is event driven. In order to have a loading screen and multithreaded loading to work, I devised this sort of implementation: //Let's check if the game is done loading. if (_game != null) { _gameLoaded = _game.DoneLoading; } //This means the game is loading still, //therefore the loading screen should be active. if (!_gameLoaded && _gameActive) { _gameScreenList[2].UpdateMenu(); } //The loading screen was selected. if (_gameScreenList[2].CurrentState == GameScreen.State.Shown && !_gameActive) { Components.Add(_game = new ParadoxGame(this)); _game.Initialize(); //Initializes the Game so that the loading can begin. _gameActive = true; } In the XNA Game Component that contains the actual game, in the LoadContent method I simply created a new Thread that calls another method ThreadLoad that has all the actual loading. I also have a boolean variable called DoneLoading in the XNA Game Component that is set to true at the end of the ThreadLoad. I am wondering if this is a poor implementation.
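    For what it's worth, the pattern described above is a common one; a minimal, hedged sketch of the idea (class and member names follow the description, the rest is illustrative), with the flag marked volatile so the write from the loader thread is visible to Update:

        public class ParadoxGame : DrawableGameComponent
        {
            public volatile bool DoneLoading;

            public ParadoxGame(Game game) : base(game) { }

            protected override void LoadContent()
            {
                // load on a background thread so the loading screen keeps animating
                new System.Threading.Thread(ThreadLoad) { IsBackground = true }.Start();
            }

            private void ThreadLoad()
            {
                // load textures, models and sounds here...
                DoneLoading = true;   // polled by the menu system each Update
            }
        }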

    Read the article

  • How to calculate square root in PHP [explained] [on hold]

    - by Enes Imsirovic
    First the code! Don't forget to embed jQuery! <html> <head> <title>Simple jQuery and PHP Square Root example</title> <script src="js/jquery-1.10.1.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { $('#form').submit(function(){ var number = $('#number').val(); $.ajax({type:"post",url:"calculate.php",data:"number=" +number,success:function(msg){$('#result').hide(); $("#result").html("<h3>" + msg + "</h3>").fadeIn("slow"); } }); return false; }); }); </script> </head> <body> <form id="form" action="calculate.php" method="post"> Enter number: <input id="number" type="text" name="number" /> <input id="submit" type="submit" value="Calculate Square Root" name="submit"/> </form> <p id="result"></p> </body> </html> The second file, which connects to the first: calculate.php <?php if($_POST['number']==null){ echo "Please Enter a Number"; }else { if (!is_numeric($_POST['number'])) { echo "Please enter only numbers"; }else{ echo "Square Root of " .$_POST['number'] ." is ".sqrt($_POST['number']); } } ?> Chiefly for beginners, to see the power of PHP :) xD Load this on your localhost. PHP files and JS: https://mega.co.nz/#!Et8zWSBb!KX2PFxa2Pzw_l-wi6QU8xi_eKTlHbtQuBsT_DvXrifk In the end it looks like this: http://imgur.com/vNnDRQ3

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that is tormenting me for 6+ months. I have a CentOS 6 (64bit) server with an md software raid-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems: root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync 146+1 records in 146+1 records out 307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s real 0m23.680s user 0m0.000s sys 0m0.932s When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100) going up from ~ 1. When doing the above I've also noticed very weird iostat results: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50 sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the second. For some reason the wear is different between the 2 members of the array root [~]# smartctl --attributes /dev/sda | grep -i wear 177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180 root [~]# smartctl --attributes /dev/sdb | grep -i wear 177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005 The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one as proven by the first iteration of iostat (the averages since reboot): Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64 sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97 md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00 What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues but nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below) I have checked /proc/mdstat and the array is Optimal (second listing below). root [~]# fdisk -ul /dev/sda Disk /dev/sda: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00026d59 Device Boot Start End Blocks Id System /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. 
/dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect root [~]# fdisk -ul /dev/sdb Disk /dev/sdb: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003dede Device Boot Start End Blocks Id System /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect /proc/mdstat root # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 204736 blocks super 1.0 [2/2] [UU] md2 : active raid1 sdb3[1] sda3[0] 404750144 blocks super 1.0 [2/2] [UU] md1 : active raid1 sdb1[1] sda1[0] 2096064 blocks super 1.1 [2/2] [UU] unused devices: Running a read test with hdparm root [~]# hdparm -t /dev/sda /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec root [~]# hdparm -t /dev/sdb /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec But look what happens if I add --direct root [~]# hdparm --direct -t /dev/sda /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec root [~]# hdparm --direct -t /dev/sdb /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec Both tests increase but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test with dd this time and it confirms the hdparm test: root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s So sda is 3 times faster than sdb. Or maybe sdb is doing also something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought it would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!

    Read the article

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation: doStuff(arg1, arg2, function(err, result) { doMoreStuff(arg3, arg4, function(err, result) { doEvenMoreStuff(arg5, arg6, function(err, result) { omgHowDidIGetHere(); }); }); }); The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and making a single object declared in the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure? function topLevelFunction(globalishObject, callback) { function doMoreStuffImpl(err, result) { doMoreStuff(arg5, arg6, function(err, result) { callback(null, globalishObject); }); } doStuff(arg1, arg2, doMoreStuffImpl); } and so on for several more layers... Or are there frameworks etc to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?
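    Besides plain named functions, one common middle ground is a control-flow helper such as the async module: waterfall runs the steps in order, hands each result to the next step, and funnels every error into a single final callback. A hedged sketch reusing the names from the snippet above (handleError is made up):

        var async = require('async');

        async.waterfall([
            function (next) {
                doStuff(arg1, arg2, next);            // next(err, result)
            },
            function (result, next) {
                doMoreStuff(arg3, arg4, next);
            },
            function (result, next) {
                doEvenMoreStuff(arg5, arg6, next);
            }
        ], function (err, finalResult) {
            if (err) { return handleError(err); }     // one place for all the error handling
            omgHowDidIGetHere();
        });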

    Read the article

  • Query something and return the reason if nothing has been found

    - by Daniel Hilgarth
    Assume I have a Query (as in CQS) that is supposed to return a single value. Let's assume that the case where no value is found is not exceptional, so no exception will be thrown in this case. Instead, null is returned. However, if no value has been found, I need to act according to the reason why no value has been found. Assuming that the Query knows the reason, how would I communicate it to the caller of the Query? A simple solution would be not to return the value directly but a container object that contains the value and the reason: public class QueryResult<TValue, TReason> { public TValue Value { get; private set; } public TReason ReasonForNoValue { get; private set; } } But that feels clumsy, because if a value is found, ReasonForNoValue makes no sense, and if no value has been found, Value makes no sense. What other options do I have to communicate the reason? What do you think of one event per reason? For reference: this is going to be implemented in C#.
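    One way to keep the container but remove the clumsiness is to make the two outcomes explicit, so a caller can only reach the member that applies; a hedged C# sketch (illustrative, not from the question):

        public sealed class QueryResult<TValue, TReason>
        {
            public bool HasValue { get; private set; }
            public TValue Value { get; private set; }
            public TReason ReasonForNoValue { get; private set; }

            private QueryResult() { }

            public static QueryResult<TValue, TReason> Found(TValue value)
            {
                return new QueryResult<TValue, TReason> { HasValue = true, Value = value };
            }

            public static QueryResult<TValue, TReason> NotFound(TReason reason)
            {
                return new QueryResult<TValue, TReason> { HasValue = false, ReasonForNoValue = reason };
            }
        }

    Callers then check HasValue before touching either property.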

    Read the article

  • Simple Interactive Search with jQuery and ASP.Net MVC

    - by Doug Lampe
    Google now has a feature where the search updates as you type in the search box. You can implement the same feature in your MVC site with a little jQuery and MVC Ajax. Here's how: Create a JavaScript global variable to hold the previous value of your search box. Use setTimeout to check whether the search box has changed after some interval. (Note: Don't use setInterval since we don't want to have to turn the timer off while Ajax is processing.) Submit the form if the value has changed. Set the update target to display your results. Set the on success callback to "start" the timer again. This, along with step 2 above, will make sure that you don't send multiple requests until the initial request has finished processing. Here is the code: <script type="text/javascript"> var searchValue = $('#Search').val(); $(function () {     setTimeout(checkSearchChanged, 0.1); }); function checkSearchChanged() {     var currentValue = $('#Search').val();     if ((currentValue) && currentValue != searchValue && currentValue != '') {         searchValue = $('#Search').val();         $('#submit').click();     }     else {         setTimeout(checkSearchChanged, 0.1);     } } </script> <h2>Search</h2> <% using (Ajax.BeginForm("SearchResults", new AjaxOptions { UpdateTargetId = "searchResults", OnSuccess = "checkSearchChanged" })) { %>     Search: <%= Html.TextBox("Search", null, new { @class = "wide" })%><input id="submit" type="submit" value="Search" /> <% } %> <div id="searchResults"></div> That's it!

    Read the article

  • Trouble using Ray.Intersect method on bounding boxes in a 2D XNA game

    - by getsauce
    I am trying to use a ray and bounding box to determine if a box is between the player and the mouse pointer in 2D space. When I try testing the code, the collision will return true when pointed at the box but it also returns true under other circumstances where it shouldn't. For instance. If I have a player on the left and a box directly to the right, I can put the mouse pointer a few hundred pixels above the box or a few hundred below and it will still return true. Also, I can put my mouse pointer to the left of the player and in a certain area it will still return true. Does anyone have any idea what might cause this? I have left out definitions for some of my members and properties just to make this code sample easier to read. The position property is just a Vector2 for where each object is located. ray = new Ray(new Vector3(player.Position, 0), new Vector3(mouse.Position, 0); box = new BoundingBox(new Vector3(box.Position, 0), new Vector3( new Vector2(box.Position + box.Width, box.Position + box.Height), 0); if (ray.Intersects(box) != null) collision = true; else collision = false;
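    The symptoms fit the way the Ray is built: XNA's Ray takes an origin and a direction, not two end points, so passing the raw mouse position as the second argument tests a different line than intended. A hedged sketch of the usual construction (boxObject is an illustrative rename so the BoundingBox isn't built from itself):

        // direction = target - origin, normalized
        Vector3 origin = new Vector3(player.Position, 0f);
        Vector3 direction = new Vector3(mouse.Position - player.Position, 0f);
        direction.Normalize();
        Ray ray = new Ray(origin, direction);

        // BoundingBox takes the min and max corners
        BoundingBox box = new BoundingBox(
            new Vector3(boxObject.Position, 0f),
            new Vector3(boxObject.Position + new Vector2(boxObject.Width, boxObject.Height), 0f));

        // Intersects returns the distance along the (infinite) ray, or null for a miss;
        // to require the box to lie between the player and the mouse, also compare the
        // hit distance with the distance from the player to the mouse.
        float? hit = ray.Intersects(box);
        bool collision = hit.HasValue
            && hit.Value <= Vector2.Distance(player.Position, mouse.Position);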

    Read the article

  • What is the value to checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example. Case sensitivity. The current code is Case Sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests has found flaws in the code. Some just need explicit implementation documentation (ex. case sensitive or not), or code that does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen and are valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category etc, to determine whether a build was successful based on tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs and weeding through the ones that should fail vs. failures due to a code check-in would be difficult to find.
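    If the team does decide to check such tests in, most frameworks can keep them out of the pass/fail gate while still documenting the bug; an NUnit-flavoured sketch (attribute names differ slightly in MSTest and xUnit, and the test body is illustrative):

        [Test]
        [Category("KnownBug")]   // lets the build server include or exclude these explicitly
        [Ignore("Bug #1234: lookup should be case-insensitive")]
        public void Parse_LowercaseCat_ReturnsCat()
        {
            Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
        }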

    Read the article

  • A Star Path finding endless loop

    - by PoeHaH
    I have implemented A* algorithm. Sometimes it works, sometimes it doesn't, and it goes through an endless loop. After days of debugging and googling, I hope you can come to the rescue. This is my code: The algorythm: public ArrayList<Coordinate> findClosestPathTo(Coordinate start, Coordinate goal) { ArrayList<Coordinate> closed = new ArrayList<Coordinate>(); ArrayList<Coordinate> open = new ArrayList<Coordinate>(); ArrayList<Coordinate> travelpath = new ArrayList<Coordinate>(); open.add(start); while(open.size()>0) { Coordinate current = searchCoordinateWithLowestF(open); if(current.equals(goal)) { return travelpath; } travelpath.add(current); open.remove(current); closed.add(current); ArrayList<Coordinate> neighbors = current.calculateCoordAdjacencies(true, rowbound, colbound); for(Coordinate n:neighbors) { if(closed.contains(n) || map.isWalkeable(n)) { continue; } int gScore = current.getGvalue() + 1; boolean gScoreIsBest = false; if(!open.contains(n)) { gScoreIsBest = true; n.setHvalue(manhattanHeuristic(n,goal)); open.add(n); } else { if(gScore<n.getGvalue()) { gScoreIsBest = true; } } if(gScoreIsBest) { n.setGvalue(gScore); n.setFvalue(n.getGvalue()+n.getHvalue()); } } } return null; } What I have found out is that it always fails whenever there's an obstacle in the path. If I'm running it on 'open terrain', it seems to work. It seems to be affected by this part: || map.isWalkeable(n) Though, the isWalkeable function seems to work fine. If additional code is needed, I will provide it. Your help is greatly appreciated, Thanks :)
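    One thing that stands out is the neighbour filter: continue-ing when map.isWalkeable(n) is true means only blocked cells ever reach the open list, which would explain why open terrain appears to work while obstacles send the search in circles. A hedged sketch of that loop with the test inverted and a parent link kept for rebuilding the route (setParent/getParent are illustrative additions to Coordinate):

        for (Coordinate n : neighbors) {
            // skip cells already finalised and cells that cannot be walked on
            if (closed.contains(n) || !map.isWalkeable(n)) {
                continue;
            }
            int gScore = current.getGvalue() + 1;
            boolean gScoreIsBest = false;
            if (!open.contains(n)) {
                gScoreIsBest = true;
                n.setHvalue(manhattanHeuristic(n, goal));
                open.add(n);
            } else if (gScore < n.getGvalue()) {
                gScoreIsBest = true;
            }
            if (gScoreIsBest) {
                n.setParent(current);                 // remember how n was reached
                n.setGvalue(gScore);
                n.setFvalue(n.getGvalue() + n.getHvalue());
            }
        }

    When current.equals(goal), walking those parent links back to start gives the actual path; travelpath as written collects every expanded node, not the route.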

    Read the article

  • Can't figure out how to make Slitaz USB persistent

    - by Dennis Hodapp
    I installed SliTaz on my USB drive. However, I can't figure out how to make it persistent automatically. Different sources tell me different ways to make it persistent. One told me to add "slitaz home=usb" to the syslinux.cfg file, like this: append initrd=/boot/rootfs.gz rw root=/dev/null vga=normal autologin slitaz home=usb but it didn't work for me. http://www.slitaz.org/en/doc/handbook/liveusb.html gives an example of how to do it manually, but I didn't try it, and I also want it to happen automatically. custompc.co.uk/features/602451/make-any-pc-your-own-with-linux-on-a-usb-key.html is an older article that also explains how to make the USB persistent, but I don't want to try it because it looks outdated (from 2008). Does anyone know the best way to make the USB automatically persistent?

    Read the article

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can read, write and delete), but my Ubuntu client can't do the same. We can only write and read, but not create or delete. Here is the smb.conf from my server: [global] workgroup = WORKGROUP netbios name = FILESERVER server string = TurnKey FileServer os level = 20 security = user map to guest = Bad Password passdb backend = tdbsam null passwords = yes admin users = root encrypt passwords = true obey pam restrictions = yes pam password change = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . add user script = /usr/sbin/useradd -m '%u' -g users -G users delete user script = /usr/sbin/userdel -r '%u' add group script = /usr/sbin/groupadd '%g' delete group script = /usr/sbin/groupdel '%g' add user to group script = /usr/sbin/usermod -G '%g' '%u' guest account = nobody syslog = 0 log file = /var/log/samba/samba.log max log size = 1000 wins support = yes dns proxy = no socket options = TCP_NODELAY panic action = /usr/share/samba/panic-action %d [homes] comment = Home Directory browseable = no read only = no valid users = %S [storage] create mask = 0777 directory mask = 0777 browseable = yes comment = Public Share writeable = yes public = yes path = /srv/storage The following fstab entry doesn't yield full R/W access to the share: //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0 This doesn't work either: //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0 Using the following location in Nemo/Nautilus without the share being mounted does work: smb://192.168.0.5/storage/ Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, and the group "no group" has read and write, with everyone else read-only. What am I doing wrong?
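    On the client side, a hedged fstab sketch that maps guest access onto a local uid/gid and forces permissive modes (the uid/gid and the mount point are illustrative; noperm stops the client doing its own permission checks):

        # /etc/fstab - single line, illustrative values
        //192.168.0.5/storage  /media/myname/TK-Public  cifs  guest,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noperm  0  0

        # test with: sudo mount -a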

    Read the article

  • Confirm that two filesystems are identical, ignoring special files

    - by endolith
    /media/A and /media/B should be identical, but I want to confirm before deleting one. Duplicate file finders don't work, because they'll find two copies of the same file within B, for instance. I only want to confirm that every file in one is identical to the other. diff -qr /media/A/ /media/B/ seems to work, but the output is cluttered with garbage like diff: /media/A//etc/alternatives/ControlPanel: No such file or directory and File /media/A//dev/tty8 is a character special file while file /media/B//dev/tty8 is a character special file I can suppress the former with 2> /dev/null, but I don't know about the latter. rsync -avn /media/A/ /media/B/ also produces a bunch of clutter, like "skipping non-regular file". How can I compare the two trees and just make sure that all the real files exist in both and are identical?
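    One hedged approach is an rsync checksum dry-run: rsync only considers regular files unless told otherwise, so the special-file noise disappears, and -c compares content rather than timestamps:

        # report anything whose content differs or that exists on only one side
        rsync -rcn --delete /media/A/ /media/B/

        # alternative: checksum every regular file and diff the two lists
        (cd /media/A && find . -type f -exec md5sum {} + | sort -k2 > /tmp/A.md5)
        (cd /media/B && find . -type f -exec md5sum {} + | sort -k2 > /tmp/B.md5)
        diff /tmp/A.md5 /tmp/B.md5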

    Read the article

  • Umbraco on Windows 7 64-bit

    - by HeavyWave
    I'm trying to install Umbraco CMS on Windows 7 64-bit and I get the following exception: [HttpException (0x80004005): Could not load file or assembly 'ImageManipulation, Version=1.0.2105.41209, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Failed to grant minimum permission required. The application pool's trust mode is set to 'Full', and the user permissions are the same as on other sites hosted on the same machine. I went through all the relevant topics on Umbraco's forum, but all the advice is about the trust level. How do I fix this?

    Read the article

  • keyPressed is not working after adding ActionListener to JButton

    - by Yehonatan
    I have a serious problem while trying to build a menu for my game. I've added two JButtons to a main JPanel and added an ActionListener for each of them. The main JPanel also contains the game JPanel, which has the keyPressed method inside KeyController. This is how it looks - Main - JPanel - JButton, JButton, JPanel which contains the game and the keyPressed function inside the KeyController class, which worked fine before I added the ActionListeners for the JButtons. For some reason, after I added an ActionListener for each of the buttons, the game JPanel is not getting any keyPressed or keyReleased events. Does anyone know the solution for my situation? Thank you very much! Main window - Scanner in = new Scanner(System.in); JFrame f = new JFrame("Square V.S Circles"); f.setUndecorated(true); f.setResizable(false); f.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); f.add(new JPanelHandler()); f.pack(); f.setVisible(true); f.setLocationRelativeTo(null); JPanelHandler (main JPanel) - super.setFocusable(true); JButton mybutton = new JButton("Quit"); JButton sayhi = new JButton("Say hi"); sayhi.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { System.out.println("Hi"); } }); mybutton.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { System.exit(0); } }); add(mybutton); add(sayhi); add(new Board(2)); Board KeyController (the code inside is working, so it's unnecessary to put it here) - private class KeyController extends KeyAdapter { public KeyController() { ..Code } @Override public void keyPressed(KeyEvent e) { ...Code } @Override public void keyReleased(KeyEvent e){ ...Code } }
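    This looks like a focus problem rather than anything wrong with KeyController: a KeyListener only fires on the component that owns the keyboard focus, and focusable JButtons take that focus away from the Board. Two hedged fixes (the key and action names are illustrative):

        // Option 1: keep the buttons from stealing focus and hand it to the game panel
        mybutton.setFocusable(false);
        sayhi.setFocusable(false);
        Board board = new Board(2);
        add(board);
        board.requestFocusInWindow();

        // Option 2: use key bindings on the game panel, which do not depend on focus ownership
        board.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
             .put(KeyStroke.getKeyStroke(KeyEvent.VK_LEFT, 0), "moveLeft");
        board.getActionMap().put("moveLeft", new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                // same logic keyPressed currently runs for VK_LEFT
            }
        });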

    Read the article

  • connect to a headless virtualbox instance in Linux?

    - by 130490868091234
    I've started a headless virtualbox instance with this command: VBoxManage startvm "Ensembl67VirtualMachine" --type headless Waiting for VM "Ensembl67VirtualMachine" to power on... VM "Ensembl67VirtualMachine" has been successfully started. It is set up with Remote Desktop Server Port:5555 with Authentication Method: Null and Extended Features: Allow Multiple Connections and it's now running, but I don't know how to connect to it from the same laptop where it's running. I would like to be able to have it running on a terminal. I tried this but nothing happens: rdesktop localhost:5555 ERROR: localhost: unable to connect rdesktop 192.168.1.1:5555 Any ideas?
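    rdesktop talks to the VM's VRDP server, which only exists if the Oracle Extension Pack is installed and remote display (VRDE) is enabled for that VM; a hedged sketch of checking and enabling it from the host (modifyvm needs the VM powered off):

        # is VRDE on, and on which port?
        VBoxManage showvminfo "Ensembl67VirtualMachine" | grep -i vrde

        # enable it, then start headless again
        VBoxManage modifyvm "Ensembl67VirtualMachine" --vrde on --vrdeport 5555
        VBoxManage startvm "Ensembl67VirtualMachine" --type headless

        # connect from the same laptop
        rdesktop localhost:5555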

    Read the article

  • OpenGL + Allegro. Moving from software drawing X Y to openGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X Y coords. Now I've started a test project with OpenGL and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a Quad? Whatever that is, and I think I've given it a texture from a bitmap and then I'm drawing that: GLuint gl_image; bitmap = load_bitmap("cat.bmp", NULL); gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA); glBindTexture(GL_TEXTURE_2D, gl_image); glBegin(GL_QUADS); glColor4ub(255, 255, 255, 255); glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0); glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0); glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0); glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0); glEnd(); So yeah. So I have a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help / does anyone know any tutorials on this weird coordinate thing? If it even is that. It's vastly different from X Y, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works, and then write a function to try and translate it to X and Y coords. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!
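    If the goal is simply to keep thinking in Allegro-style pixel coordinates, an orthographic projection maps the screen onto OpenGL's coordinate space so quads can be placed with plain X/Y values; a hedged sketch (SCREEN_W/SCREEN_H are the dimensions the display was created with):

        // once, after setting the graphics mode
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, SCREEN_W, SCREEN_H, 0, -1, 1);   // (0,0) top-left, Y grows downward, like Allegro
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // now a 64x64 sprite can be drawn at pixel position (x, y)
        glBindTexture(GL_TEXTURE_2D, gl_image);
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(x,      y);
            glTexCoord2f(1, 0); glVertex2f(x + 64, y);
            glTexCoord2f(1, 1); glVertex2f(x + 64, y + 64);
            glTexCoord2f(0, 1); glVertex2f(x,      y + 64);
        glEnd();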

    Read the article

  • Automatically cycle numerous or large files to the trash

    - by minameismud
    I've been tasked with fixing a vendor's program that, under certain conditions, dumps gigs of junk files into a log directory. It ends up filling users' machines. My task is to figure out how to make it stop without any source code or additional running processes, and without making the program kasplode. In other words, I'm looking to use a feature of the file system to control the growth. One idea I had was to make a hard link from that folder to NUL, as you might with /dev/null in the linux world. However, my attempts to use the mklink program to create a junction result in a message that says Local volumes are required to complete the operation. Any ideas on how to complete the junction, or other ideas to solve the problem?
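    Since a junction can't point at the NUL device (junction targets must be directories on a local volume), a hedged fallback that adds no resident process beyond the built-in Task Scheduler is a periodic purge; the path and the 7-day cutoff are illustrative:

        rem purge_logs.cmd - delete vendor log files older than 7 days
        forfiles /p "C:\Vendor\App\logs" /s /m *.log /d -7 /c "cmd /c del @path"

        rem schedule it to run daily at 03:00:
        rem   schtasks /create /tn PurgeVendorLogs /sc daily /st 03:00 /tr C:\scripts\purge_logs.cmd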

    Read the article

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
Make sure the items in bold below are in the "getActions" method of the project node: @Override public Action[] getActions(boolean arg0) { return new Action[]{ CommonProjectActions.newFileAction(), CommonProjectActions.copyProjectAction(), CommonProjectActions.moveProjectAction(), CommonProjectActions.renameProjectAction(), CommonProjectActions.deleteProjectAction(), CommonProjectActions.customizeProjectAction(), CommonProjectActions.closeProjectAction() }; } Run the Application. When you run the application, you should see this: Let's now try out the various actions: Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd.If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar more complex scenarios. Move. When you invoke the Move action, the dialog below is shown: Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed. Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable, in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method: private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { private final CustomerProject project; private CustomerProjectDeleteOperation(CustomerProject project) { this.project = project; } @Override public List<FileObject> getDataFiles() { List<FileObject> files = new ArrayList<FileObject>(); FileObject[] projectChildren = project.getProjectDirectory().getChildren(); for (FileObject fileObject : projectChildren) { addFile(project.getProjectDirectory(), fileObject.getNameExt(), files); } return files; } private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) { FileObject file = projectDirectory.getFileObject(fileName); if (file != null) { result.add(file); } } @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation: Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario. 
Useful implementations to look at: http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

    Read the article

  • Time to Get Started with Oracle Data Integrator 12c!

    - by Sandrine Riley
    It is time to get started with Oracle Data Integrator 12c! We would like to highlight for you a great place to begin your journey with ODI. Here you will find the Getting Started section for ODI, which provides a few options. Step 1 – Start by downloading the Getting Started (PDF). The document provides general background information and detailed examples to help you learn how to use Oracle Data Integrator. This document guides you through a first project with Oracle Data Integrator 12c, and the instructions in this document are required for using the Getting Started Demonstration Environment. If you would like to run the Getting Started Demonstration Environment on your system, go to Step 2 A. If you would like to run the Getting Started Demonstration Environment installed and pre-configured in a Virtual Image, go to Step 2 B. Step 2 A – If you would like to run the Getting Started Demo Environment on your system, you will want to then download the Demo Environment for Getting Started (ZIP) archive containing ODI metadata, configuration scripts and sample data. Step 2 B – Alternatively, a pre-configured virtual machine is available with the Getting Started installation and configuration. The Virtual Machine platform uses Oracle Virtual Box technology, and use of this configuration would necessitate the download/installation of Virtual Box and the Getting Started virtual machine. If you prefer this route and would rather run the Getting Started Demonstration Environment installed and preconfigured in a Virtual Machine, please download the ODI 12c Getting Started Virtual Machine. Step 3 – In continuation with the Getting Started Demonstration Environment and the Getting Started Guide, you will now be able to run through a first project with Oracle Data Integrator. Now that you have successfully done the exercises above, you may want to play on your own, and develop with Oracle Data Integrator on your environment... here you can download a full version of Oracle Data Integrator 12c. Enjoy, and happy developing! Learn more about Oracle Data Integrator; including FAQ, the Oracle by Example Series, Sample Code, and White Papers.

    Read the article

  • mongod fork vs nohup

    - by Daniel Kitachewsky
    I'm currently writing process management software. One package we use is mongo. Is there any difference between launching mongo with mongod --fork --logpath=/my/path/mongo.log and nohup mongod >> /my/path/mongo.log 2>&1 < /dev/null & ? My first thought was that --fork could spawn more processes and/or threads, and it was suggested to me that --fork could be useful for changing the effective user (downgrading privileges). But we run everything under the same user (process manager and mongod), so is there any other difference? Thank you

    Read the article

  • Creating an update method in a different class

    - by Sweta Dwivedi
    I have created a class called 3DModel which animates my 3D model by changing the model position according to values read from a .txt file into a list... Since I'm using a loop to read the point values, XNA throws an out-of-bounds exception when it reaches the end of the file (which is obvious), but if I add the same code to my Game.cs Update(gameTime) method, then I don't have this problem. Any idea how to make my 3D model update work the same as the Update in Game.cs? Here is the code to give some idea: public void patterns(GameTime gameTime) { motion_z = new List<Point3D>(); if (pattern == 1) { f = "E:/Motion_Track-output/Output1.txt"; } if (pattern == 2) { f = "E:/Motion_Track-output/cruse.txt"; } // TODO: Add your update logic here using (StreamReader r = new StreamReader(f)) { string line; //Viewport view = graphics.GraphicsDevice.Viewport; int maxWidth = view.Width; int maxHeight = view.Height; while ((line = r.ReadLine()) != null) { string[] temp = line.Split(','); int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth); int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight); int z = (int)Math.Floor(((float.Parse(temp[2]) / 4 * 20000))); motion_z.Add(new Point3D(x, y, z)); } modelPosition.X = (float)(motion_z[i].X); modelPosition.Y = (float)(motion_z[i].Y); modelPosition.Z = (float)(motion_z[i].Z); i++; } //Console.WriteLine("modelposX:" + modelPosition.X + "," + "motionzX:" + motion_z[i].X); }
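    A hedged sketch of one way to split this (field and method names are illustrative): parse the file once, then step through the cached points each frame and clamp the index so it never runs past the last entry. File.ReadLines assumes .NET 4; otherwise a StreamReader loop works the same way.

        public void LoadPattern(string file, Viewport view)
        {
            motion_z = new List<Point3D>();
            foreach (string line in System.IO.File.ReadLines(file))
            {
                string[] temp = line.Split(',');
                int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * view.Width);
                int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * view.Height);
                int z = (int)Math.Floor(float.Parse(temp[2]) / 4 * 20000);
                motion_z.Add(new Point3D(x, y, z));
            }
            i = 0;
        }

        public void patterns(GameTime gameTime)
        {
            if (motion_z == null || motion_z.Count == 0) return;
            modelPosition = new Vector3((float)motion_z[i].X, (float)motion_z[i].Y, (float)motion_z[i].Z);
            if (i < motion_z.Count - 1) i++;   // clamp at the last point instead of overrunning the list
        }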

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake Scenario: There is a piece of software that was released 1 year ago. The software maps and registers all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag indicating whether it is at risk of extinction, and a dangerousness scale (this is a fake piece of software and specification; I don't want to discuss that here). There are already 100,000 animal records saved in the DB. New Feature: One year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. Or maybe where the animal was discovered. Problem: I already have 100,000 recorded animals without a class or a place of discovery, but I need to insert a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or place of discovery). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB as well). What are the alternatives for solving this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records. The time has already passed, and I can't go back in time to get it.
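    One common migration pattern, sketched in T-SQL-flavoured SQL (the table, column and the 'Unknown' placeholder are illustrative and would need the client's agreement): add the column as nullable, backfill the 100,000 existing rows with an explicit placeholder, then enforce the constraint so every new record must supply a real class.

        -- 1. add the new column without the constraint
        ALTER TABLE Animal ADD AnimalClass VARCHAR(50) NULL;

        -- 2. backfill existing rows with an explicit, queryable placeholder
        UPDATE Animal SET AnimalClass = 'Unknown' WHERE AnimalClass IS NULL;

        -- 3. now the NOT NULL rule can live in the database as well
        ALTER TABLE Animal ALTER COLUMN AnimalClass VARCHAR(50) NOT NULL;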

    Read the article
