Search Results

Search found 6020 results on 241 pages for 'valid'.


  • Multiple possible jsp views for a request

    - by Karl Walsh
    I'm looking to offer the user some way of changing how a single page looks based on pre-defined JSPs, i.e. two or more JSPs contain similar information and would be backed by a single controller method. The controller would decide which view to return. Is there a common way of achieving this? At the moment I have some administration screens where I control a list of possible views, and the user can then choose which one to see from a drop-down. My current issue is that I don't know how to confirm (at the admin screen) that a view is valid. Is there a way of asking Spring for all possible views, so I can filter them and present a drop-down on the admin screen rather than a free-text field? If not, is there a way of asking Spring whether a single view is valid? All these views will reside under a common directory, so it would probably be possible to scan recursively from that point and build a list of possible views. This goes beyond simply changing the CSS, since the page content might be different despite being backed by the same model.
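    As a language-agnostic illustration of that last idea (this is not Spring-specific code), here is a minimal Python sketch that recursively scans a shared view directory to build the drop-down list and to validate a single view name; the directory path and the .jsp extension are assumptions:

        from pathlib import Path

        VIEW_ROOT = Path("/opt/app/WEB-INF/views")   # hypothetical shared view directory

        def list_views(root=VIEW_ROOT):
            """Recursively collect candidate view names, e.g. 'reports/summary'."""
            return sorted(
                str(p.relative_to(root).with_suffix("")).replace("\\", "/")
                for p in root.rglob("*.jsp")
            )

        def is_valid_view(name, root=VIEW_ROOT):
            """A view name is valid if its JSP exists under the shared directory."""
            return (root / f"{name}.jsp").is_file()

        # Populate the admin drop-down from the filesystem instead of a free-text field
        for view in list_views():
            print(view)

    The same walk, done in Java against the webapp's JSP folder, would feed the admin drop-down and make the free-text field unnecessary.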

    Read the article

  • Linqpad with Table Storage

    - by kaleidoscope
    LINQPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Azure Table storage in the picture, LINQPad dropped out of the picture and we shifted focus to Cloud Storage Studio, only to realize the limited and strange querying capabilities of CSS. With some tweaking of LINQPad we can get the comfortable old shoe of ad-hoc queries back for Azure Table storage. Steps:
    1. Start LINQPad.
    2. Right-click in the query window and select "Query Properties".
    3. In the Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll and the assembly containing the implementation of the DataServiceContext class tied to Azure Table storage.
    4. In the additional namespace imports, import the same three namespaces mentioned above.
    5. Then we need to provide the following details:
       a. Table storage account name and shared key.
       b. The DataServiceContext-implementing class in your code.
       c. A LINQ query. For example:

        var storageAccountName = "myStorageAccount"; // Enter valid storage account name
        var storageSharedKey = "mysharedKey";        // Enter valid storage account shared key
        var uri = new System.Uri("http://table.core.windows.net/");
        var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false);
        var serviceContext = new TweetPollDataServiceContext(storageAccountInfo); // Specify the DataServiceContext implementation
        // The query
        var query = from row in serviceContext.Table
                    select row;
        query.Dump();

    Sarang, K

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether or not you have SQL tools like SSMS installed, this is a very easy task if you are aware of UDL (Universal Data Link) files. Create a new file and name it anything, as long as it has the .udl extension. Open the file and choose a provider, then click Next >> or navigate to the Connection tab to provide connection information. Once you provide server and login credentials, the database list will populate. At that point you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is useful if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you'll find said connection string:

        Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp

    I hope this tip saves you some time. How do you test if you don't have SSMS installed?
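    If there is no Windows desktop handy for the UDL trick, a quick scripted check can do the same job. Below is a minimal Python sketch using pyodbc; the driver name, server, and database are placeholders, and it assumes pyodbc plus a SQL Server ODBC driver are installed:

        import pyodbc

        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"  # assumed driver; use whatever is installed
            "SERVER=(local);"
            "DATABASE=master;"
            "Trusted_Connection=yes;"
        )

        try:
            cnxn = pyodbc.connect(conn_str, timeout=5)   # timeout is the login timeout in seconds
            cnxn.cursor().execute("SELECT 1").fetchone()
            cnxn.close()
            print("Connection OK")
        except pyodbc.Error as err:
            print(f"Connection failed: {err}")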

    Read the article

  • Slick 2D first trial error

    - by pringlesinn
    I followed some advice on learning Slick2D, and when I started with the "SimpleGame" example I got my first error. Does anyone have any idea what it is and how to fix it?

        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:Slick Build #274
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:LWJGL Version: 2.0b1
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:OriginalDisplayMode: 1024 x 768 x 16 @60Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:TargetDisplayMode: 800 x 600 x 0 @0Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 ERROR:Could not find a valid pixel format
        org.lwjgl.LWJGLException: Could not find a valid pixel format
            at org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(Native Method)
            at org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(WindowsPeerInfo.java:52)
            at org.lwjgl.opengl.WindowsDisplayPeerInfo.initDC(WindowsDisplayPeerInfo.java:54)
            at org.lwjgl.opengl.WindowsDisplay.createWindow(WindowsDisplay.java:158)
            at org.lwjgl.opengl.Display.createWindow(Display.java:299)
            at org.lwjgl.opengl.Display.create(Display.java:848)
            at org.lwjgl.opengl.Display.create(Display.java:800)
            at org.newdawn.slick.AppGameContainer.tryCreateDisplay(AppGameContainer.java:299)
            at org.newdawn.slick.AppGameContainer.access$000(AppGameContainer.java:34)
            at org.newdawn.slick.AppGameContainer$2.run(AppGameContainer.java:364)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:345)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)
        Exception in thread "main" org.newdawn.slick.SlickException: Failed to initialise the LWJGL display
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:375)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)

    Read the article

  • Understand how the TLB (Translation Lookaside Buffer) works and interacts with the page table and addresses

    - by Darxval
    So I am trying to understand the TLB (Translation Lookaside Buffer), but I am having a hard time grasping it in the context of having two streams of addresses, a TLB, and a page table. I don't understand how the TLB is associated with the streamed addresses/tags and with the page table.

        a. 4669, 2227, 13916, 34587, 48870, 12608, 49225
        b. 12948, 49419, 46814, 13975, 40004, 12707

        TLB
        Valid  Tag  Physical Page Number
        1      11   12
        1      7    4
        1      3    6
        0      4    9

        Page Table
        Valid  Physical Page or in Disk
        1      5
        0      Disk
        0      Disk
        1      6
        1      9
        1      11
        0      Disk
        1      4
        0      Disk
        0      Disk
        1      3
        1      12

    How does the TLB work with the page table and addresses? The homework question given is: Given the address stream in the table, and the initial TLB and page table states shown above, show the final state of the system; also list for each reference whether it is a hit in the TLB, a hit in the page table, or a page fault. But I think first I just need to know how the TLB works with these other elements and how to determine things. How do I even start to answer this question?
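    To get a feel for the mechanics (this is not a worked answer to the homework), here is a minimal Python sketch that pushes an address stream through the TLB and page table above. The 4 KiB page size, the FIFO replacement in the fully associative TLB, and the list of free frames handed out on a page fault are all assumptions made for illustration:

        PAGE_SIZE = 4096  # assumed page size, so virtual page number = address // 4096

        # Fully associative TLB: (valid, tag, physical page number)
        tlb = [
            {"valid": 1, "tag": 11, "ppn": 12},
            {"valid": 1, "tag": 7,  "ppn": 4},
            {"valid": 1, "tag": 3,  "ppn": 6},
            {"valid": 0, "tag": 4,  "ppn": 9},
        ]

        # Page table keyed by virtual page number; missing keys mean "on disk"
        page_table = {0: 5, 3: 6, 4: 9, 5: 11, 7: 4, 10: 3, 11: 12}

        free_frames = [13, 14, 15, 16, 17]  # assumption: frames used to service page faults

        def access(addr):
            vpn = addr // PAGE_SIZE
            # 1. Look in the TLB first
            for entry in tlb:
                if entry["valid"] and entry["tag"] == vpn:
                    return vpn, "TLB hit"
            # 2. On a TLB miss, walk the page table
            if vpn in page_table:
                outcome, ppn = "TLB miss, page table hit", page_table[vpn]
            else:
                outcome, ppn = "TLB miss, page fault", free_frames.pop(0)
                page_table[vpn] = ppn            # the fault handler installs the mapping
            # 3. Refill the TLB (FIFO eviction of the oldest entry; an assumed policy)
            tlb.pop(0)
            tlb.append({"valid": 1, "tag": vpn, "ppn": ppn})
            return vpn, outcome

        for addr in [4669, 2227, 13916, 34587, 48870, 12608, 49225]:
            vpn, outcome = access(addr)
            print(f"address {addr:5d} -> VPN {vpn:2d}: {outcome}")

    Running the same loop over stream b and dumping tlb and page_table afterwards gives the "final state of the system" part of the exercise, under whichever replacement policy the course actually specifies.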

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files, one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are:
    1. Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside is that people sometimes don't name their content logically, and I think the title of an article better reflects the content that will be in each text file.
    2. Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server.
    3. Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it.
    So my question here is: what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
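    For reference, option 2 (the whitelist) tends to be short to implement. Here is a minimal Python sketch; the allowed character set, the length cap, and the "untitled" fallback are assumptions:

        import re
        import unicodedata

        def title_to_filename(title, max_len=100):
            """Turn an arbitrary title into a filesystem-safe name (whitelist approach)."""
            # Decompose accented characters and drop the combining marks (lossy by design)
            ascii_title = (
                unicodedata.normalize("NFKD", title)
                .encode("ascii", "ignore")
                .decode("ascii")
            )
            # Replace anything outside the whitelist with a hyphen, then trim stray hyphens
            safe = re.sub(r"[^A-Za-z0-9._-]+", "-", ascii_title).strip("-")
            return safe[:max_len] or "untitled"

        print(title_to_filename("Qu'est-ce que c'est? A côté du blog!"))
        # -> "Qu-est-ce-que-c-est-A-cote-du-blog"

    The normalization step is deliberately lossy: accented characters degrade to their ASCII base letters, which usually keeps the name readable without leaving anything unsafe behind.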

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (2 x 1TB drives) in size. When I use it, it reports only one of the drives. I would like to be able to use both as if they were one drive. Here are the specs:

        sudo lshw -C disk
        *-disk
             description: ATA Disk
             product: TOSHIBA DT01ACA1
             vendor: Toshiba
             physical id: 0.0.0
             bus info: scsi@1:0.0.0
             logical name: /dev/sda
             version: MS2O
             serial: 13EJ81XPS
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=0005b3dd
        *-disk
             description: ATA Disk
             product: TOSHIBA DT01ACA1
             vendor: Toshiba
             physical id: 0.0.0
             bus info: scsi@4:0.0.0
             logical name: /dev/sdb
             version: MS2O
             serial: 13OX3TKPS
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=00030e86

    and fdisk -l:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00030e86

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sdb2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sdb3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0005b3dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sda2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sda3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/md2: 978.2 GB, 978187124736 bytes
        2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/md1: 21.5 GB, 21474770944 bytes
        2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md1 doesn't contain a valid partition table

    Is it possible to mount both drives to, say, /Home/ so I would have 2TB of usable space?

    Read the article

  • Apache (XAMPP 1.8.0) access.log/Intrusion Detection Concern

    - by Andy Holaday
    [I originally posted on SO but it earned me a Tumbleweed badge. This looks like a better venue for the question.]
    I have Apache (XAMPP 1.8.0) running on Vista Pro x64. A couple of times now I have seen a pattern like the example below in access.log. What concerns me is that the "attack" seems to somehow shift from a public IP to a valid private IP on my network (it happens to be the WAN address of one of my routers). Two questions: How is this possible, and what happens if the "attacker" stumbles on a valid request? I've googled this to no avail.

        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.5.4/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.5.5-rc1/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.2.6/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:34 -0400] "GET /phpMyAdmin-2.5.5-rc2/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:19:56 -0400] "GET /phpMyAdmin-2.5.6-rc2/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:56 -0400] "GET /phpMyAdmin-2.5.6-rc1/index.php HTTP/1.1" 403
        177.0.X.X - - [03/Jun/2012:08:19:56 -0400] "GET /phpMyAdmin-2.5.5-pl1/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:19:59 -0400] "GET /phpMyAdmin-2.5.7/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:20:01 -0400] "GET /phpMyAdmin-2.5.7-pl1/index.php HTTP/1.1" 403
        192.168.15.3 - - [03/Jun/2012:08:20:02 -0400] "GET HTTP/1.1" 400 1060 "-" "-"

    Read the article

  • DDD Model Design and Repository Persistence Performance Considerations

    - by agarhy
    So I have been reading about DDD for some time and trying to figure out the best approach to several issues. I tend to agree that I should design my model in a persistence-agnostic manner, and that repositories should load and persist my models in valid states. But are these approaches realistic in practice? I mean, it's normal for a model to hold a reference to a collection of another type. Persisting that model should mean persisting the entire collection. Fine. But do I really need to load the entire collection every time I load the model? Probably not. So I can have specialized repositories: some that load maybe a subset of the object graph via DTOs, and others that load the entire object graph. But when do I use which? If I have DTOs, what's stopping client code from calling them directly and completely bypassing the model? I can have mappers and factories to create my models from DTOs, maybe? But depending on the design of my models that might not always work, or it might not allow my models to be created in a valid state. What's the correct approach here?
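    One common way to picture the split being described is a repository for the full aggregate plus a separate read-side query object that returns DTOs. The Python sketch below is only an illustration; the Order/OrderSummary names and the in-memory storage are assumptions, not a prescription:

        from dataclasses import dataclass, field

        @dataclass
        class Order:                      # the full aggregate, including its line items
            order_id: int
            customer: str
            lines: list = field(default_factory=list)

        @dataclass(frozen=True)
        class OrderSummary:               # a read-only DTO: no line items, never persisted
            order_id: int
            customer: str
            line_count: int

        class OrderRepository:
            """Loads and saves whole aggregates in a valid state."""
            def __init__(self, storage):
                self._storage = storage   # e.g. a dict keyed by order_id

            def get(self, order_id) -> Order:
                return self._storage[order_id]

            def save(self, order: Order) -> None:
                self._storage[order.order_id] = order

        class OrderSummaryQuery:
            """Read-side query object: returns DTOs, bypassing the domain model on purpose."""
            def __init__(self, storage):
                self._storage = storage

            def list_summaries(self):
                return [
                    OrderSummary(o.order_id, o.customer, len(o.lines))
                    for o in self._storage.values()
                ]

    Because the summary type is frozen and has no save path, client code that grabs DTOs from the query object cannot push an invalid state back into the domain; only the repository, which deals in whole aggregates, can persist anything.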

    Read the article

  • Page_BlockSubmit - reset it to false if there is a scenario where the page doesn't post back on a validation error

    - by Vipin
    Recently I was facing a problem where, if there was a validation error and I changed the state of a checkbox, the page would not post back on the first attempt. But when I unchecked and checked it again, it posted back on the second attempt. This is some quirky behaviour in the ASP.NET platform. The solution was to reset the Page_BlockSubmit flag to false, and it works fine. The following explanation is from http://lionsden.co.il/codeden/?p=137&cpage=1#comment-143

    The submit button on the page is a member of vgMain, so automatically it will only run the validation on that group. A solution is needed that will run validation on multiple groups and block the postback if needed.

    Solution
    Include the following function on the page:

        function DoValidation() {
            //validate the primary group
            var validated = Page_ClientValidate('vgPrimary');

            //if it is valid
            if (validated) {
                //validate the main group
                validated = Page_ClientValidate('vgMain');
            }

            //remove the flag to block the submit if it was raised
            Page_BlockSubmit = false;

            //return the results
            return validated;
        }

    Call the above function from the submit button's OnClientClick event:

        <asp:Button runat="server" ID="btnSubmit" CausesValidation="true"
                    ValidationGroup="vgMain" Text="Next"
                    OnClick="btnSubmit_Click"
                    OnClientClick="return DoValidation();" />

    What is Page_BlockSubmit?
    When the user clicks on a button causing a full postback, after running Page_ClientValidate ASP.NET runs another built-in function, ValidatorCommonOnSubmit. Within Page_ClientValidate, Page_BlockSubmit is set based on the validation. The postback is then blocked in ValidatorCommonOnSubmit if Page_BlockSubmit is true. No matter what, at the end of the function Page_BlockSubmit is always reset back to false. If a page does a partial postback without running any validation and Page_BlockSubmit has not been reset to false, the partial postback will be blocked. In essence the above function, DoValidation, acts similarly to ValidatorCommonOnSubmit: it runs the validation and then returns false to block the postback if needed. Since the built-in postback is never run, we need to reset Page_BlockSubmit manually before returning the validation result.

    Read the article

  • Apache redirecting: reason unknown

    - by Sinan
    I have a simple PHP script. The script is not important; it just prints out $_SERVER. When I request a URL like www.server.com/?ref=bar everything is fine. However, if the request contains something like www.server.com/?ref=http://www.test.com (?ref=http%3A%2F%2Fwww.test.com), the server redirects to 403.shtml. There is no redirect for http://x, but it redirects for http://x.y. As far as I can understand, somehow the server doesn't like "http://x.y": it always redirects to 403.shtml when the query string contains a valid URL. My .htaccess file is the same on both my server and my local test server, and the local test server behaves as expected (no redirects), so I don't think it is related to .htaccess. I'm on shared hosting at HostGator. Can anyone help?
    Edit: Here's the .htaccess file:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?$1 [L,QSA]

    When there is an http://xx.x it redirects to 403 even if there is a physical file. However, if I remove the .htaccess the redirect to 403 also disappears. But I need the above .htaccess file. Is there a way to get around this?

    Read the article

  • Do we really need a thousand Linux distributions?

    - by nebukadnezzar
    Pointed from an answer to a (possibly related) question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already popular distributions with minimal changes, usually limited to themes, wallpapers, buttons, the kind of stuff most people probably wouldn't see as a reason to fork a Linux distribution. Of course, someone will always say "open source is also about the freedom of choice", and while I wholeheartedly agree, I do not believe that this is a valid reason to fork an already perfectly working distribution into a new one, which might well result in less security/stability due to a smaller group of developers. There's another problem: those who want to switch to Linux are confronted with a never-ending list of Linux distributions and rightfully wonder which one they're supposed to choose (in fact, I was facing that problem before I discovered Ubuntu). There might be (very few) valid reasons to fork a distribution:
    - Specializing in a particular topic (FOSS only, a work-related topic such as a hospital, etc.)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such
    But even with these points in mind, it would still seem easier to create a subdistribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. So, why exactly do we need so many distributions?

    Read the article

  • SQL – Crossword Puzzle Based on Course Building Successful High Traffic Profitable Blog

    - by Pinal Dave
    Do you like crossword puzzles? I personally love them. Every time I open the newspaper, I try to solve at least one crossword or sudoku. It is just fun to tease the brain a little and stretch its limits. Regular readers of the blog are aware that I have recently published two courses on how to build a successful, high-traffic, profitable blog. Here are the links to watch both courses: Course 1, Course 2. Do watch them in order, as both courses have unique content which can help you build a better blog. On my birthday, July 30th, an interesting blog post was published on the Pluralsight blog: a crossword built from my two courses. I encourage you to try to solve the crossword which I have built.
    Giveaway: There is a cool gift for the winner: a melting clock. Do not confuse this with a dummy or non-working clock. It looks like it is melting, but it always shows accurate time and it is perfectly balanced to hang off any flat surface.
    How to Participate: Well, it is very simple; you just have to complete the crossword and send it to me at pinal at sqlauthority.com with all valid answers. The deadline is that you must send it before Monday, August 5, 2013, or before the valid answer keys are posted on the Pluralsight blog.
    Hints: Though the crossword is very easy and intuitive, if you ever get stuck here are two hints: Hint 1, Hint 2. Log in to the Pluralsight courses and watch both of them. Watching the courses will not only help you easily complete the crossword; there are also hidden gems and secrets to building a high-traffic, profitable blog.
    Here is the link to download the crossword: Download Crossword. Alternatively, you can download the image displayed below and print it as well.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Blogging

    Read the article

  • How to mount an external HDD?

    - by Slash
    I have Ubuntu 12.04, the latest version right now. I want to mount an external 1TB NTFS HDD, and I have followed many guides but still had no success. The error I'm getting is this:

        Failed to read last sector (1953523119): Invalid argument
        HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
           or it was not setup correctly (e.g. by not using mdadm --build ...),
           or a wrong device is tried to be mounted,
           or the partition table is corrupt (partition is smaller than NTFS),
           or the NTFS boot sector is corrupt (NTFS size is not valid).
        Failed to mount '/dev/sdb1': Invalid argument
        The device '/dev/sdb1' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a
        partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    Using Storage Device Manager I get this error:

        Error mounting: mount exited with exit code 1: helper failed with:
        mount: only root can mount /dev/sdb1 on /media/Skliros_Diskos {external disk name}

    When I use sudo fdisk -l, this is the output:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e0bc6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   618854399   309426176   83  Linux
        /dev/sda2       618856446   625141759     3142657    5  Extended
        /dev/sda5       618856448   625141759     3142656   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002093a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048  1953525167   976761560    7  HPFS/NTFS/exFAT

    Read the article

  • SSMS Tools Pack 2.5 is out. Added support for SQL Server 2012.

    - by Mladen Prajdic
    Because I wanted to make SSMS Tools Pack as solid as possible for SSMS 2012, there are no new features, only bug fixes and speed improvements. I am planning new awesome features for the next version, so be on the lookout. The biggest change is that SSMS Tools Pack for SSMS 2012 is no longer free. For previous SSMS versions it is still free. Licensing now offers the following options:
    - Per-machine license ($29.99). Perfect if you do all your work from a single machine. This license is valid per major release of SSMS Tools Pack (e.g. v2.x, v3.x, v4.x).
    - Fully transferable license valid for 3 months ($99.99). Perfect for work across many machines. It's not bound to a machine or an SSMS Tools Pack version.
    - 30-day license. A time-based demo license bound to a machine.
    You can view all the details on the Licensing page. If you want to receive email notifications when a new version of SSMS Tools Pack is out, you can do that on the Main page or on the Download page. This is also the last version to support SSMS 2005 and 2005 Express. Enjoy it!

    Read the article

  • Pythonic use of the isinstance function?

    - by Pace
    Whenever I find myself wanting to use the isinstance() function, I usually know that I'm doing something wrong and end up changing my ways. However, in this case I think I have a valid use for it. I will use shapes to illustrate my point, although I am not actually working with shapes. I am parsing XML configuration files that look like the following:

        <square>
          <width>7</width>
        </square>
        <rectangle>
          <width>5</width>
          <height>7</height>
        </rectangle>
        <circle>
          <radius>4</radius>
        </circle>

    For each element I create an instance of the Shape class and build up a list of Shape objects in a class called the ShapeContainer. Different parts of the rest of my application need to refer to the ShapeContainer to get certain shapes. Depending on what the code is doing, it might need just rectangles, or it might operate on all quadrangles, or it might operate on all shapes. I have created the following function in the ShapeContainer class (the actual function uses a list comprehension, but I have expanded it here for readability):

        def locate(self, shapeClass):
            result = []
            for shape in self.__shapes:
                if isinstance(shape, shapeClass):
                    result.append(shape)
            return result

    Is this a valid use of the isinstance function? Is there another way I can do this which might be more Pythonic?
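    For comparison, the list-comprehension form the author alludes to is sketched below, along with the fact that isinstance accepts a tuple of classes, which lets callers ask for several kinds of shape at once. The Quadrangle/Rectangle/Circle hierarchy here is an assumed stand-in:

        class Shape: ...
        class Quadrangle(Shape): ...
        class Rectangle(Quadrangle): ...
        class Circle(Shape): ...

        class ShapeContainer:
            def __init__(self, shapes):
                self.__shapes = list(shapes)

            def locate(self, *shape_classes):
                # isinstance happily takes a tuple of classes
                return [s for s in self.__shapes if isinstance(s, shape_classes)]

        container = ShapeContainer([Rectangle(), Circle(), Quadrangle()])
        rectangles = container.locate(Rectangle)
        quads_and_circles = container.locate(Quadrangle, Circle)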

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I now ask udev to re-read the config? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, a reboot fixes this issue.)
    UPD: yes, I know I should prepend the commands with sudo, but either command I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0.
    UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. There is no error when executing either of the commands posted above, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected.
    UPD 3: let me explain the whole case better ;-) For development purposes I am writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I run a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the persistent network rules. Right after the machine boots for the first time there are two rules:
    1. eth0, which does not exist, since it refers to the MAC of the original VM image
    2. eth1, which exists, but all the configuration in all the files refers to eth0, so that is no good for me
    So with sed I delete the line for eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it takes extra time, which is not a good thing at the VM-building stage) and just want my /dev rebuilt with some command, so I have a ready-to-use VM without any reboots.

    Read the article

  • Mounting an encrypted partition error

    - by indiajoe
    Using the Disk Utility in Ubuntu 11.04, I had encrypted a partition with a passphrase. Each time I clicked on the partition to mount it, it would ask me for the passphrase and get mounted. All was fine until I installed 12.04. After the installation, this encrypted partition disappeared from the menu.
    fdisk -l /dev/sda shows the encrypted partition in the list:

        /dev/sda7       298953648   488392064    94719208+   7  HPFS/NTFS/exFAT

    I tried the following commands to mount it, but they all gave the following errors:

        $ sudo cryptsetup luksDump /dev/sda7
        Device /dev/sda7 is not a valid LUKS device.

        $ ecryptfs-unwrap-passphrase /dev/sda7
        Passphrase:    # i entered the correct passphrase here...
        Error: Unwrapping passphrase failed [-5]
        Info: Check the system log for more information from libecryptfs

        $ grep ecryptfs /var/log/syslog
        Oct 31 22:43:51 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
        Nov  1 01:28:02 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
        Nov  1 01:29:06 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading

    I don't understand why I am getting "Device /dev/sda7 is not a valid LUKS device." Could it be due to some corruption in the partition table? Is there any way to recover this encrypted partition?
    Thanks,
    indiajoe

    Read the article

  • What's so bad about pointers in C++?

    - by Martin Beckett
    To continue the discussion in "Why are pointers not recommended when coding with C++": suppose you have a class that encapsulates objects which need some initialisation to be valid, like a network socket.

        // Blah manages some data and transmits it over a socket
        class TcpSocket;   // forward declaration, so nice weak linkage

        class blah {
            // ... stuff
            TcpSocket *socket;

            ~blah() {
                // TcpSocket dtor handles disconnect
                delete socket;   // or better, wrap it in a smart pointer
            }
        };

    The ctor ensures that socket is marked NULL; then later in the code, when I have the information to initialise the object:

        // initialising blah
        if ( !socket ) {
            // I know socket hasn't been created/connected
            // create it in a known initialised state and handle any errors
            // RAII is a good thing !
            socket = new TcpSocket(ip, port);
        }

        // and when I actually need to use it
        if (socket) {
            // if socket exists then it must be connected and valid
        }

    This seems better than having the socket on the stack, having it created in some 'pending' state at program start, and then having to continually check some isOK() or isConnected() function before every use. Additionally, if the TcpSocket ctor throws an exception, it's a lot easier to handle at the point a TCP connection is made rather than at program start. Obviously the socket is just an example, but I'm having a hard time thinking of when an encapsulated object with any sort of internal state shouldn't be created and initialised with new.

    Read the article

  • disk space wrongly displayed after installing LVM disk

    - by Ubuntuser
    I installed Ubuntu Server using LVM partitioning on a 1 TB hard disk. However, after installation I can only see 10 GB of space. Here is the fdisk output:

        # fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00041507

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *            1          10       71680   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2               10      121602   976689152   8e  Linux LVM

        Disk /dev/dm-0: 10.7 GB, 10737418240 bytes
        255 heads, 63 sectors/track, 1305 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/dm-1: 2147 MB, 2147483648 bytes
        255 heads, 63 sectors/track, 261 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-1 doesn't contain a valid partition table

        You have new mail in /var/mail/root

    and the df -h output:

        df -h
        Filesystem               Size  Used Avail Use% Mounted on
        /dev/mapper/system-root  9.9G  6.6G  2.8G  71% /
        devtmpfs                 1.9G  232K  1.9G   1% /dev
        tmpfs                    1.9G  4.0K  1.9G   1% /dev/shm
        /dev/sda1                 68M   22M   43M  34% /boot
        tmpfs                    6.0G     0  6.0G   0% /var/spool/asterisk/monitor
        You have new mail in /var/mail/root

    Is there any way to increase this space without reinstalling?

    Read the article

  • Displaying possible movement tiles

    - by Ash Blue
    What's the fastest way to highlight all possible movement tiles for a player on a square grid? Players can only move up, down, left, right. Tiles can cost more than one movement, multiple levels are available to move, and players can be larger than one tile. Think of games like Fire Emblem, Front Mission, and XCOM. My first thought was to recursively search for connecting tiles. This quickly demonstrated many shortcomings when blockers, movement costs, and other features were added into the mix. My second thought was to use an A* pathfinding algorithm to check all tiles presumed valid. Presumed valid tiles would come from an algorithm that generates a diamond of tiles from the player's speed (see example here http://jsfiddle.net/truefreestyle/Suww8/9/). Problem is this seems a little slow and expensive. Is there a faster way? Edit: In Lua for Corona SDK, I integrated the following movement generation controller. I've linked to a Gist here because the solution is around 90 lines of code. https://gist.github.com/ashblue/5546009
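    For what it's worth, the usual alternative to running A* per candidate tile is a single cost-limited Dijkstra/BFS flood fill from the unit's position. Below is a minimal Python sketch; the grid encoding (None for blocked tiles, an integer entry cost otherwise) is an assumption, and it ignores multi-tile units and multiple levels:

        import heapq

        def movement_range(grid, start, move_points):
            """Return {tile: cost} for every tile reachable within move_points.

            grid[y][x] is the cost to enter a tile, or None if it is blocked.
            """
            frontier = [(0, start)]
            best = {start: 0}
            while frontier:
                cost, (x, y) = heapq.heappop(frontier)
                if cost > best[(x, y)]:
                    continue                      # stale queue entry
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if not (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])):
                        continue
                    step = grid[ny][nx]
                    if step is None:
                        continue                  # blocked tile
                    new_cost = cost + step
                    if new_cost <= move_points and new_cost < best.get((nx, ny), float("inf")):
                        best[(nx, ny)] = new_cost
                        heapq.heappush(frontier, (new_cost, (nx, ny)))
            return best

        grid = [
            [1, 1, 2, 1],
            [1, None, 2, 1],
            [1, 1, 1, 1],
        ]
        print(sorted(movement_range(grid, (0, 0), 3)))

    One pass like this yields every reachable tile together with its cheapest cost, so the same data can drive both the range highlight and the later path to whichever tile the player picks.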

    Read the article

  • Need help with cybersquatting complaint: can a domain name forward AND resolve at same time? [on hold]

    - by Alan
    Probably a silly question for you pros, but as a novice I just want to make sure my understanding is correct. Context: I am trying to prove that a domain name owner has been cybersquatting and has never used the domain name in question. There are four captures from the Wayback Machine over a three-year period that show the domain name resolving to a basic server index page with either no files or a single cgi-bin folder. The domain name owner claims, however, that the domain name was forwarded to another website over that entire time, and that these captures probably coincided with occasional "outages." It is my understanding that:
    a) Domain name forwarding is binary: if a domain name is forwarded to a valid site, it cannot simultaneously resolve to a valid IP address. Is this correct?
    b) Domain name forwarding is not subject to "outages": servers can have outages and websites can be down, but the forwarding itself cannot be down, as it is simply a pointer. (Or the entire registrar where the DNS settings are hosted would have to malfunction.) Is this correct?
    FINALLY, a bonus question for pro webmasters: what is the likelihood that the Wayback Machine would capture the domain name on just those occasions when the webmaster disabled forwarding to supposedly work on the new site? Mucho thanks in advance!

    Read the article

  • Using LINQ to Twitter OAuth with Windows 8

    - by Joe Mayo
    In previous posts, I explained how to use LINQ to Twitter with Windows 8, but the example was a Twitter Search, which didn't require authentication. Much of the Twitter API requires authentication, so this post will explain how you can perform OAuth authentication with LINQ to Twitter in a Windows 8 Metro-style application.

    Getting Started
    I have earlier posts on how to create a Windows 8 app and add pages, so I'll assume it isn't necessary to repeat that here. One difference is that I'm using Visual Studio 2012 RC, and some of the terminology and/or library code might be slightly different. Here are the steps to get started:
    1. Create a new Windows Metro style app, selecting the Blank App project template.
    2. Create a new Basic Page and name it OAuth.xaml. Note: You'll receive a prompt window for adding files and you should click Yes because those files are necessary for this demo.
    3. Add a new Basic Page named TweetPage.xaml.
    4. Open App.xaml.cs and change !rootFrame.Navigate(typeof(MainPage)) to !rootFrame.Navigate(typeof(TweetPage)).
    Now that the project is set up, you'll see the reason why authentication is required by setting up the TweetPage.

    Setting Up to Tweet a Status
    In this section, I'll show you how to set up the XAML and code-behind for a tweet. The tweet logic will check to see if the user is authenticated before performing the tweet. To tweet, I put a TextBox and Button on the XAML page. The following code omits most of the page, concentrating primarily on the elements of interest in this post:

        <StackPanel Grid.Row="1">
            <TextBox Name="TweetTextBox" Margin="15" />
            <Button Name="TweetButton" Content="Tweet" Click="TweetButton_Click" Margin="15,0" />
        </StackPanel>

    Given the UI above, the user types the message they want to tweet, and taps Tweet. This invokes TweetButton_Click, which checks to see if the user is authenticated. If the user is not authenticated, the app navigates to the OAuth page. If they are authenticated, LINQ to Twitter does an UpdateStatus to post the user's tweet. Here's the TweetButton_Click implementation:

        void TweetButton_Click(object sender, RoutedEventArgs e)
        {
            PinAuthorizer auth = null;

            if (SuspensionManager.SessionState.ContainsKey("Authorizer"))
            {
                auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer;
            }

            if (auth == null || !auth.IsAuthorized)
            {
                Frame.Navigate(typeof(OAuthPage));
                return;
            }

            var twitterCtx = new TwitterContext(auth);

            Status tweet = twitterCtx.UpdateStatus(TweetTextBox.Text);

            new MessageDialog(tweet.Text, "Successful Tweet").ShowAsync();
        }

    For authentication, this app uses PinAuthorizer, one of several authorizers available in the LINQ to Twitter library. I'll explain how PinAuthorizer works in the next section. What's important here is that LINQ to Twitter needs an authorizer to post a tweet. The code above checks to see if a valid authorizer is available. To do this, it uses the SuspensionManager class, which is part of the code generated earlier when creating OAuthPage.xaml. The SessionState property is a Dictionary<string, object> and I'm using the Authorizer key to store the PinAuthorizer. If the user previously authorized during this session, the code reads the PinAuthorizer instance from SessionState and assigns it to the auth variable. If the user is authorized, auth would not be null and IsAuthorized would be true. Otherwise, the app navigates the user to OAuthPage.xaml, which I'll discuss in more depth in the next section. When the user is authorized, the code passes the authorizer, auth, to the TwitterContext constructor. LINQ to Twitter uses the auth instance to build OAuth signatures for each interaction with Twitter. You no longer need to write any more code to make this happen. The code above accepts the tweet just posted in the Status instance, tweet, and displays a message with the text to confirm success to the user. You can pull the PinAuthorizer instance from SessionState, instantiate your TwitterContext, and use it as you need. Just remember to make sure you have a valid authorizer, like the code above. As shown earlier, the code navigates to OAuthPage.xaml when a valid authorizer isn't available. The next section shows how to perform the authorization upon arrival at OAuthPage.xaml.

    Doing the OAuth Dance
    This section shows how to authenticate with LINQ to Twitter's built-in OAuth support. From the user perspective, they must be navigated to the Twitter authentication page, add credentials, be navigated to a Pin number page, and then enter that Pin in the Windows 8 application. The following XAML shows the relevant elements that the user will interact with during this process:

        <StackPanel Grid.Row="2">
            <WebView x:Name="OAuthWebBrowser" HorizontalAlignment="Left" Height="400"
                     Margin="15" VerticalAlignment="Top" Width="700" />
            <TextBlock Text="Please perform OAuth process (above), enter Pin (below) when ready, and tap Authenticate:"
                       Margin="15,15,15,5" />
            <TextBox Name="PinTextBox" Margin="15,0,15,15" Width="432" HorizontalAlignment="Left" IsEnabled="False" />
            <Button Name="AuthenticatePinButton" Content="Authenticate" Margin="15" IsEnabled="False"
                    Click="AuthenticatePinButton_Click" />
        </StackPanel>

    The WebView in the code above is what allows the user to see the Twitter authentication page. The TextBox is for entering the Pin, and the Button invokes code that will take the Pin and allow LINQ to Twitter to complete the authentication process. As you can see, there are several steps to OAuth authentication, but LINQ to Twitter tries to minimize the amount of code you have to write. The two important parts of the code to make this happen are the part that starts the authentication process and the part that completes the authentication process. The following code, from OAuthPage.xaml.cs, shows a couple of events that are instrumental in making this process happen:

        public OAuthPage()
        {
            this.InitializeComponent();
            this.Loaded += OAuthPage_Loaded;
            OAuthWebBrowser.LoadCompleted += OAuthWebBrowser_LoadCompleted;
        }

    The OAuthWebBrowser_LoadCompleted event handler enables UI controls when the browser is done loading – notice that the TextBox and Button in the previous XAML have their IsEnabled attributes set to False. When the Page.Loaded event is invoked, the OAuthPage_Loaded handler starts the OAuth process, shown here:

        void OAuthPage_Loaded(object sender, RoutedEventArgs e)
        {
            auth = new PinAuthorizer
            {
                Credentials = new InMemoryCredentials
                {
                    ConsumerKey = "",
                    ConsumerSecret = ""
                },
                UseCompression = true,
                GoToTwitterAuthorization = pageLink =>
                    Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                        OAuthWebBrowser.Navigate(new Uri(pageLink, UriKind.Absolute)))
            };

            auth.BeginAuthorize(resp =>
                Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (resp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(resp.Error.ToString(), resp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The PinAuthorizer, auth, a field of this class instantiated in the code above, assigns keys to the Credentials property. These are credentials that come from registering an application with Twitter, explained in the LINQ to Twitter documentation, Securing Your Applications. Notice how I use Dispatcher.RunAsync to marshal the web browser navigation back onto the UI thread. Internally, LINQ to Twitter invokes the lambda expression assigned to GoToTwitterAuthorization when starting the OAuth process. In this case, we want the WebView control to navigate to the Twitter authentication page, which is defined with a default URL in LINQ to Twitter and passed to the GoToTwitterAuthorization lambda as pageLink. Then you need to start the authorization process by calling BeginAuthorize. This starts the OAuth dance, running asynchronously. LINQ to Twitter invokes the callback assigned to the BeginAuthorize parameter, allowing you to take whatever action you need, based on the Status of the response, resp. As mentioned earlier, this is where the user performs the authentication process, enters the Pin, and clicks Authenticate. The handler for Authenticate completes the process and saves the authorizer for subsequent use by the application, as shown below:

        void AuthenticatePinButton_Click(object sender, RoutedEventArgs e)
        {
            auth.CompleteAuthorize(
                PinTextBox.Text,
                completeResp =>
                    Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        switch (completeResp.Status)
                        {
                            case TwitterErrorStatus.Success:
                                SuspensionManager.SessionState["Authorizer"] = auth;
                                Frame.Navigate(typeof(TweetPage));
                                break;
                            case TwitterErrorStatus.RequestProcessingException:
                            case TwitterErrorStatus.TwitterApiError:
                                new MessageDialog(completeResp.Error.ToString(), completeResp.Message).ShowAsync();
                                break;
                        }
                    }));
        }

    The PinAuthorizer CompleteAuthorize method takes two parameters: Pin and callback. The Pin is what the user entered in the TextBox prior to clicking the Authenticate button that invoked this method. The callback handles the response from completing the OAuth process. The completeResp holds information about the results of the operation, indicated by a Status property of type TwitterErrorStatus. On success, the code assigns auth to SessionState. You might remember SessionState from the previous description of TweetPage – this is where the valid authorizer comes from. After saving the authorizer, the code navigates the user back to TweetPage, where they can type in a message, click the Tweet button, and observe that they have successfully tweeted.

    Summary
    You've seen how to get started with using LINQ to Twitter in a Metro-style application. The generated code contained a SuspensionManager class with a way to manage information across multiple pages via its SessionState property. You also saw how LINQ to Twitter performs authorization in two steps: starting the process and completing the process when the user provides a Pin number. Remember to marshal callback threads back onto the UI – you saw earlier how to use Dispatcher.RunAsync to accomplish this. There were a few steps in the process, but LINQ to Twitter did minimize the amount of code you needed to write to make it happen. You can download the MetroOAuthDemo.zip sample on the LINQ to Twitter Samples Page.

    @JoeMayo

    Read the article

  • How to write constructors which might fail to properly instantiate an object

    - by whitman
    Sometimes you need to write a constructor which can fail. For instance, say I want to instantiate an object with a file path, something like

        obj = new Object("/home/user/foo_file")

    As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could:
    1. throw an exception
    2. return a null object (if your programming language allows constructors to return values)
    3. return a valid object but with a flag indicating that its path wasn't set properly (ugh)
    4. others?
    I assume that the "best practices" of various programming languages would implement this differently. For instance, I think Objective-C prefers (2). But (2) would be impossible to implement in C++, where constructors have no return type. In that case I take it that (1) is used. In your programming language of choice, can you show how you'd handle this problem and explain why?
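    In Python, for example, the two idioms you see most often are option (1), raising from __init__, and a named factory method that returns None instead, which is roughly option (2). A minimal sketch (the FooFile name is made up):

        import os

        class FooFile:
            def __init__(self, path):
                if not os.path.isfile(path):
                    # Option 1: refuse to construct an invalid object
                    raise ValueError(f"not a valid file path: {path!r}")
                self.path = path

            @classmethod
            def try_open(cls, path):
                # Option 2 (factory): callers get None instead of an exception
                return cls(path) if os.path.isfile(path) else None

        try:
            obj = FooFile("/home/user/foo_file")
        except ValueError as err:
            print(err)

        maybe_obj = FooFile.try_open("/home/user/foo_file")  # None if the path is bad

    The factory keeps __init__ as the single place that enforces the invariant, so an object that exists is always valid either way.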

    Read the article

  • How do I initialize a Scala map with more than 4 initial elements in Java?

    - by GlenPeterson
    For 4 or fewer elements, something like this works (or at least compiles):

        import scala.collection.immutable.Map;

        Map<String,String> HAI_MAP = new Map4<>("Hello", "World",
                                                "Happy", "Birthday",
                                                "Merry", "XMas",
                                                "Bye", "For Now");

    For a 5th element I could do this:

        Map<String,String> b = HAI_MAP.$plus(new Tuple2<>("Later", "Aligator"));

    But I want to know how to initialize an immutable map with 5 or more elements and I'm flailing in Type-hell.

    Partial Solution
    I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files. Here's the Scala:

        object JavaMapTest {
          def main(args: Array[String]) = {
            val HAI_MAP = Map(("Hello", "World"),
                              ("Happy", "Birthday"),
                              ("Merry", "XMas"),
                              ("Bye", "For Now"),
                              ("Later", "Aligator"))
            println("My map is: " + HAI_MAP)
          }
        }

    But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java):

        scala.collection.immutable.Map HAI_MAP = (scala.collection.immutable.Map)
            scala.Predef..MODULE$.Map().apply(scala.Predef..MODULE$.wrapRefArray(
                scala.Predef.wrapRefArray((Object[])new Tuple2[] {
                    new Tuple2("Hello", "World"),
                    new Tuple2("Happy", "Birthday"),
                    new Tuple2("Merry", "XMas"),
                    new Tuple2("Bye", "For Now"),
                    new Tuple2("Later", "Aligator")
                }));

    I'm really baffled by the two periods in this: scala.Predef..MODULE$
    I asked about it on #java on Freenode and they said the .. looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost. Based on my experimentation, the following is valid:

        Tuple2[] x = new Tuple2[] {
            new Tuple2<String,String>("Hello", "World"),
            new Tuple2<String,String>("Happy", "Birthday"),
            new Tuple2<String,String>("Merry", "XMas"),
            new Tuple2<String,String>("Bye", "For Now"),
            new Tuple2<String,String>("Later", "Aligator")
        };

        scala.collection.mutable.WrappedArray<Tuple2> y = scala.Predef.wrapRefArray(x);

    There is even a WrappedArray.toMap() method but the types of the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java.

    Read the article
