Search Results

Search found 22515 results on 901 pages for 'created'.


  • Booting off a ZFS root in 14.04

    - by RJVB
    I've been running a Debian derivative (LMDE) on a ZFS root for half a year now. It was created by cloning a regular ext4-based install, with all the necessary packages, onto a ZFS pool, chrooting into that pool and recreating a grub menu and bootloader. The system uses a dedicated ext3 /boot partition. I would like to do the same with Ubuntu 14.04, but have encountered several obstacles. There is no Trusty zfs-grub package, and the default grub package doesn't have ZFS support built in. I found a small bug in the build system responsible for that (report with patch filed) and built my own grub packages. The built-in ZFS support turned out to be dysfunctional: it does not add the proper arguments to the kernel command line. I thus installed the ZoL grub package I also use on my LMDE system, which does give me a correct grub.cfg. However, even with that correct grub.cfg, the boot process apparently doesn't retrieve the bootfs parameter from the ZFS pool; the variable that's supposed to receive the value remains empty. As a result, initrd tries to import the default pool ("rpool"), which of course fails. I can, however, import the pool by hand and complete the boot process manually. If memory serves, I also had to disable AppArmor to keep the boot process from blocking after importing the pool. Am I overlooking something? Just for comparison, I installed the Ubuntu 3.13 kernel on my LMDE system, and that works just fine (i.e. the identical kernel and grub binaries boot successfully without glitches on LMDE but not on Ubuntu).
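
    For reference, the by-hand recovery looks roughly like this from the initramfs shell (a sketch; "rpool/ROOT/ubuntu" is an example dataset name, adjust to your layout):

        zpool import -f -N rpool                  # force-import the pool without mounting its datasets
        zpool get bootfs rpool                    # check whether bootfs is actually set on the pool
        zpool set bootfs=rpool/ROOT/ubuntu rpool  # set it if it came back empty
        mount -t zfs rpool/ROOT/ubuntu /root      # mount the root dataset where the initrd expects it
        exit                                      # resume the normal boot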

    Read the article

  • IP not detected in Terremark Enterprise Cloud server - how to install VMware Tools on instance?

    - by JohnMerlino
    Using Terremark Enterprise Cloud, when you create a server you assign an IP address to it, and that IP is visible under Detected IP when selecting the server. However, I created a server with an IP address, then created an internet service and connected it to a node, using protocol TCP mapped to port 3001. But I notice that when I select my server, the IP address doesn't display under Detected IP, and when I VPN Connect, launch a terminal and try to SSH to my server with that IP, I get connection timed out. I presume the reason is that the IP address is not being detected. Someone suggested that my VMware Tools is out of date, and in fact the server instance does say "out of date" for VMware Tools. I'm not sure how to mount the instance and install VMware Tools. I am using Mac OS X. Someone said that it will only work on a PC running IE.
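
    If it helps, the usual manual VMware Tools install on a Linux guest goes roughly like this (a sketch, assuming the cloud console's "Install VMware Tools" action has attached the virtual CD; exact paths vary by release):

        sudo mount /dev/cdrom /mnt                        # attach the VMware Tools virtual CD
        tar xzf /mnt/VMwareTools-*.tar.gz -C /tmp         # unpack the installer
        sudo /tmp/vmware-tools-distrib/vmware-install.pl  # run the installer and accept the defaults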

    Read the article

  • Automate new AD user's home folder creation and permission setup

    - by vn.
    I know that if we set up a base folder or a profile path in the Profile tab of an AD user, we can copy that user, and the folder creation and permission setup will be automated. My problem is that not all my users have a roaming profile, and the home folder linking is done through GPO. When I copy from these users, the home folder isn't created automatically, and I have to create it manually and change permission and ownership on that folder on the file server. What should I do? A script may be nice, but it'd have to run every time a new user is created, and I don't think we can link a script to AD user creation? I'd like to avoid any manual steps and keep my GPO the way it is. Using a Windows Server 2008 R2 DC with Windows 7 client boxes. Thanks.
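
    For what it's worth, user creation can be hooked: Windows Server 2008 R2 logs event ID 4720 ("a user account was created") in the Security log, and Task Scheduler can trigger a task on that event. A rough sketch of the script such a task could run (the server, share and domain names are placeholders):

        REM create-home.cmd <samAccountName> -- fileserver, home$ and MYDOMAIN are placeholders
        set USERDIR=\\fileserver\home$\%1
        mkdir "%USERDIR%"
        REM grant the user modify rights, inherited by new files and subfolders:
        icacls "%USERDIR%" /grant MYDOMAIN\%1:(OI)(CI)M
        REM make the user the owner of their own home folder:
        icacls "%USERDIR%" /setowner MYDOMAIN\%1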

    Read the article

  • How to update a Debian DNS server? New VM with same hostname as old VM

    - by opensourcechris
    We run several Linux VMs on our Hyper-V cluster. Our old IT manager configured the DNS server to resolve the URL 'devlabs.ourdomain.com' to a Debian Squeeze Apache webserver hosted on the Hyper-V cluster with the hostname 'devlabs'. We recently created a new Ubuntu VM to replace the original Squeeze VM, and when we created it we used the same hostname, 'devlabs', to name the new VM. My problem is that now I am only able to access the new Ubuntu VM by using the IP address. How can I update our DNS server to point the URL 'devlabs.ourdomain.com' to the new VM?
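
    If the DNS server is a stock Debian BIND 9 setup (an assumption on my part), the update amounts to pointing the record at the new VM's address, bumping the zone's serial number, and reloading; roughly:

        ; /etc/bind/db.ourdomain.com (example file name)
        ; change the A record to the new VM's IP (placeholder below) and increment the SOA serial:
        devlabs   IN  A   192.0.2.42

        sudo rndc reload ourdomain.com    # tell BIND to pick up the zone change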

    Read the article

  • Find last match of string automatically

    - by jowan
    I want entry IDs to be 7 digits long: when the first entry is created it gets ID 0000001, and my problem is that I want to take the current ID and add 1 to it every time a new entry is created. I have a bunch of code but am still confused about how to implement it.

        $str_rep = "0000123";
        $str_rep2 = "0005123"; // My character string can be like this
        $str_rep3 = "0009123"; // My character string can be like this
        $match_number = array(1,2,3,4,5,6,7,8,9); // I created this array to do it automatically, but it did not work.
        // So I do it manually
        $get_str = strstr($str_rep, "1");
        $get_str2 = strstr($str_rep2, "5");
        $get_str3 = strstr($str_rep3, "9");
        // Result
        echo $get_str . "<br>";
        echo $get_str2 . "<br>";
        echo $get_str3 . "<br>";

    Thanks in advance
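
    For what it's worth, the increment itself needs no string searching at all; a minimal sketch, assuming the ID is the whole 7-digit string:

        <?php
        $current = "0000123";                             // last issued ID
        $next    = (int)$current + 1;                     // the cast drops the leading zeros: 123 -> 124
        $next_id = str_pad($next, 7, "0", STR_PAD_LEFT);  // pad back out to 7 digits
        echo $next_id;                                    // prints 0000124
        // sprintf("%07d", $next) is an equivalent one-liner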

    Read the article

  • Cron process not starting

    - by vkris
    I have an EC2 image created with cron jobs. These jobs fail to run; I discovered the cron process itself has not started. So I included /usr/sbin/cron in /etc/rc.d/rc.local and created another image, but still, for some reason, the cron process does not start on bootup. If I restart the machine the cron process runs; it just doesn't run when the machine boots up! Any reason why this is happening? Also, are there any alternatives for this?
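
    A sketch of the usual fix, registering cron with the init system instead of rc.local (which pair applies depends on the distribution the image is based on):

        # Red Hat / Amazon Linux style images:
        sudo chkconfig crond on          # enable the service for the default runlevels
        sudo service crond start         # start it now, without a reboot

        # Debian / Ubuntu style images:
        sudo update-rc.d cron defaults   # create the rc*.d start/stop symlinks
        sudo service cron start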

    Read the article

  • Logical volume that spans raid1 sets: what happens if a RAID fails?

    - by Jeff Shattock
    Consider the following scenario:

        /dev/md0     - 10GB RAID 1 volume built from /dev/sda and /dev/sdb
        /dev/md1     - 10GB RAID 1 volume built from /dev/sdc and /dev/sdd
        /dev/vg0     - volume group containing md0 and md1
        /dev/vg0/lv0 - 15GB logical volume

    The RAID devices are created with mdadm; the logical volumes by LVM. What happens to lv0 if md0 fails entirely? That is, if both sda and sdb disintegrate so that the md0 device cannot start. Is the portion of the data that resided on md1 still accessible, or is the entire LV gone? Would the answer change if lv0 were created as a striped volume vs. non-striped?
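
    For anyone who wants to just try it, the layout above can be reproduced in a scratch VM along these lines (a sketch, and it destroys data on the named disks):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
        pvcreate /dev/md0 /dev/md1       # mark both arrays as LVM physical volumes
        vgcreate vg0 /dev/md0 /dev/md1   # pool them into one volume group
        lvcreate -n lv0 -L 15G vg0       # linear lv0; add "-i 2" for the striped variant
        # then simulate total loss of the first mirror and see what survives:
        vgchange -an vg0                 # deactivate the VG first
        mdadm --stop /dev/md0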

    Read the article

  • Outlook mail/calendar items give errors after server migration

    - by Mike B
    Last Friday our Exchange server was migrated by our external system administrator to a new server, with a new server name. Since then we have had problems with the calendar/mail items that were created/sent/received on the old server: Replies to mails bounce if we use auto-complete in the To field; if we cancel auto-complete and manually enter the (same) e-mail address, there's no problem. Our system administrator says this is because auto-complete fills in the old server name (???). Calendar items created on the old server cannot be edited without an error and must be recreated if we want to change them. Our system administrator says these problems are normal with a server migration. I cannot believe this. There must be a better way. Am I right?

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have 3 screens: the main menu, options and game screen. A single game session usually lasts for 1 min, so the user will alternate frequently between the main menu and game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets, like their background textures, but differ in others. The first solution I came up with is making 2 texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with their own ContentManager, and I'll store them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache it, since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since the main, options and game screens share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app lifetime. For example, this class gets passed to the options screen when it is created. However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires looking to see if it's already in the common factory and, if not, modifying the existing factories, and so on. And this is just considering textures; I didn't add sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?
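
    (For reference, the CommonAssetTexFactory mentioned above is nothing clever; roughly the following, with the property name invented for illustration. ContentManager.Load caches internally, so repeated gets return the same texture instance.)

        public class CommonAssetTexFactory
        {
            private readonly ContentManager contentManager;

            public CommonAssetTexFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider) { RootDirectory = rootDirectory };
            }

            // Shared by the main menu and the game screen; loaded once for the app lifetime.
            public Texture2D SharedBackground
            {
                get { return contentManager.Load<Texture2D>("SharedBackground"); }
            }
        }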

    Read the article

  • What is the value of checking in failing unit tests?

    - by user20194
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat", and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or have code that does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen, using valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that should fail vs. failures due to a code check-in would be difficult.
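
    As a concrete illustration of the flagging idea (NUnit syntax; AnimalParser is a hypothetical name for the method under discussion, and other frameworks have equivalents), a checked-in known-bug test can be kept out of the gating build by category instead of being deleted:

        [Test]
        [Category("KnownBug")]  // the build server can exclude this category, e.g. nunit-console /exclude:KnownBug
        public void Parse_ShouldBeCaseInsensitive()
        {
            Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));  // fails until the bug is fixed
        }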

    Read the article

  • Drupal migration failed

    - by Marco
    First of all, I'm new to Drupal, and the work I have to do is somewhat too hard for me. My old colleague (webmaster) had a server with a multisite Drupal 6 installation. The sites and their directories were (e.g.):

        b.a.mycompany.com  ->  /drupal_install_dir/sites/b.a.mycompany.com
        c.a.mycompany.com  ->  /drupal_install_dir/sites/c.a.mycompany.com
        d.a.mycompany.com  ->  /drupal_install_dir/sites/d.a.mycompany.com

    Unluckily my colleague moved on, and the server HDDs aren't in my hands: all I have is a backup of /drupal_install_dir and three SQL dumps (one for each site). I have to restore the three sites, but changing them to z.mycompany.com/b, z.mycompany.com/c and z.mycompany.com/d. Being a sysadmin, I:

        1. extracted the tar.gz backup file under wwwroot (let's call the full path to the extracted directory /new_install_dir)
        2. restored the three databases
        3. created MySQL users and gave them the correct GRANTs on the databases

    Then (trying to restore at least the first site) I changed /new_install_dir/sites/settings.php, putting in the correct database connection data and the new basepath. But there is no way I can see my new site; it simply doesn't work. Watching /var/log/apache2/error.log I saw Drupal searching for the main drupal database, so I created that DB too, setting user and grants, but the dump file is empty. Well, now I can run something like install.php or update.php, but my site is not shown. Is there something I can do? Do I have to walk another way? Consider that I searched the web, but I'm not able to find a guide that can help me with my problem. Ah, I forgot: before producing the backup, my colleague set the sites to maintenance mode. When I try to run z.mycompany.com/?q=user (trying to log in) nothing happens. I'm really stuck...
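
    For reference, a Drupal 6 per-site settings.php for a site moving to a subdirectory contains something along these lines (credentials and names are placeholders; D6 uses $db_url rather than D7's $databases array):

        <?php
        // e.g. /new_install_dir/sites/z.mycompany.com.b/settings.php
        $db_url   = 'mysqli://db_user:db_pass@localhost/db_site_b';
        $base_url = 'http://z.mycompany.com/b';   // no trailing slash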

    Read the article

  • How can I prevent JungleDisk/Mac OS X (10.6) creating a local volume for a removed external drive?

    - by Rew
    OK, here is the situation: I use JungleDisk to sync an online folder onto an external drive connected to my Mac. If I right-click Finder, click Go to Folder... and then type /Volumes/, I see the drive linked there. Once I remove the external drive, an actual folder is created there with the name of the external drive, and JungleDisk continues to copy files into this folder rather than stopping. Is this a feature of Mac OS X? Can I turn it off? After I reconnect my external drive, the link to the drive is appended with a 1 (so if I called the drive SpareDrive it becomes SpareDrive 1, as the newly created folder is called SpareDrive). I realise my explanation isn't very clear, but if anyone understands this and knows how to prevent it happening, please let me know. PS: I have a low reputation as I don't use this site often; I tend to use stackoverflow, but will check back here for answers.
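
    If someone wants to clean the stale folder up from Terminal, something like the following should do it, assuming the drive is named SpareDrive (stop the JungleDisk sync first, and check the output of the first two commands before deleting anything):

        mount | grep SpareDrive            # make sure the real drive is NOT currently mounted
        ls /Volumes/SpareDrive             # confirm this is the stale folder, not the disk
        sudo rm -rf /Volumes/SpareDrive    # remove it so the next mount gets the original name back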

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me. Some of the key resources/objects that the API returns come back with different levels of detail, depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects aren't returned (thus increasing the size of the response and therefore the time taken to respond) unless they're needed. So, to be clear:

        .../car/1234.json returns the full Car object for 1234, with all its properties like colour, make, model, year, engine_size, etc. Let's call this full.
        .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite.
        .../search.json returns, among other things, a list of Car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite.

    I want to know the pros and cons of each of the following possible designs, and whether there is a better design that I haven't covered:

        1. Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class.
        2. Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses.
        3. Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created).

    I expect, among other things, there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.
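
    A minimal sketch of the third option, with the enum and property names invented for illustration:

        public enum ResponseDetail { Full, Lite, LiteLite }

        public class Car
        {
            // Present at every detail level.
            public string Id { get; set; }
            public string Make { get; set; }
            public string Model { get; set; }

            // Only populated by the fuller responses; null means "not returned".
            public string Colour { get; set; }
            public int? Year { get; set; }
            public decimal? EngineSize { get; set; }

            // Records which endpoint this instance came from.
            public ResponseDetail Detail { get; private set; }

            public Car(ResponseDetail detail) { Detail = detail; }
        }

    Since every instance is the same type, Cars from all three endpoints can share one List<Car>, and consumers can check Detail before touching the optional properties.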

    Read the article

  • What's the point of the Prototype design pattern?

    - by user1905391
    So I'm learning about design patterns in school. Many of them are silly little ideas, but they nevertheless solve some recurring problems (singleton, adapters, asynchronous polling, etc.). But today I was told about the so-called 'Prototype' design pattern. I must be missing something, because I don't see any benefits from it. I've seen people online say it's faster than using "new", but this doesn't make any sense, since at some point, regardless of how the new object is created, memory needs to be allocated for it, etc. Furthermore, doesn't this pattern run in the same circles as the 'chicken or egg' problem? By this I mean: since the prototype pattern essentially just clones objects, at some point the original object must be created itself (i.e., not cloned). So this would mean that I would need to have an existing copy of every object that I would ever want to clone, already ready to clone? Seems stupid to me. Can anyone explain what the use of this pattern is? Original post: http://stackoverflow.com/questions/13887704/whats-the-point-of-the-prototype-design-pattern
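
    For what it's worth, the pattern in miniature (a C# sketch; LoadTexture is a hypothetical expensive call). The point is less raw allocation speed than configuring one instance the expensive way and stamping out copies, so only one original per configuration ever has to be built the slow way:

        public class Enemy
        {
            public byte[] Texture { get; set; }   // imagine filling this requires a file load
            public int Health { get; set; }

            // The prototype's one job: produce a copy of an already-configured instance.
            public Enemy Clone()
            {
                return (Enemy)MemberwiseClone();  // shallow copy is fine for this sketch
            }
        }

        // usage (inside some method):
        // one expensive setup...
        var prototype = new Enemy { Texture = LoadTexture("orc.png"), Health = 100 };
        // ...then cheap copies that skip the file load entirely.
        var orc1 = prototype.Clone();
        var orc2 = prototype.Clone();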

    Read the article

  • How to add a VM Server to a VM Domain when the host is not on that domain.

    - by Charlie
    I have created a VM running Windows Server 2008 R2, using VMware. I have configured this as a domain controller running a Windows Server 2008 domain. I have also created another VM running Windows Server 2008 R2. The HOST machine is running Windows 7 Professional 64-bit. When I try to add the second VM into the domain that the first is the DC for, it fails because the VM cannot contact the DC. Simple question really: what have I missed? Is it something to do with the configuration on the host machine? What do I need to do to enable this scenario? Thanks
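
    One thing worth checking first, since it is the usual culprit for "cannot contact the DC": the second VM's DNS must point at the domain controller itself, not at the host's router. A sketch (the interface name, address and domain are examples):

        REM point the second VM's DNS at the DC's IP address:
        netsh interface ip set dns "Local Area Connection" static 192.168.10.5
        REM then verify the AD domain resolves through the DC:
        nslookup mydomain.local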

    Read the article

  • Dual boot Windows 8 Pro and Windows 7 on XPS 8500 Special Edition

    - by Jesse
    I am trying to install a dual boot with Windows 7 Premium and Windows 8 Pro on an XPS 8500 Special Edition. I created a new primary partition on my C: drive, inserted the Windows 8 install disk, and rebooted my computer from DVD. I select custom install, and the "Where do you want to install Windows?" dialog box pops up, but none of my drives are listed. Please help me determine what is going on. I don't understand why none of my drives show up on this menu, not even the original drive. When I go to Load Driver and click on the partition I created, it tells me "No signed device drivers were found. Make sure the installation media contains the correct drivers, and then click OK."

    Read the article

  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site hosting from BlueHost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured, and it loads fine when I specify the IP address in the browser. I have gone into Route 53 on the AWS console and created a "hosted zone" for the domain, then created a new record set of type "A" using the IP address as the value. I have a domain name registered with BlueHost. I've logged into the BlueHost account and updated the domain's name servers to point to those specified in Route 53 in the AWS console. When I hit the IP address directly the site loads; however, it doesn't load when using the domain name (I get a Google Chrome oops page saying the page is not found). I've tried using this site: http://dns.squish.net/ to debug, but it seems to be giving me the correct results: fizaclegems.com 300 IN A 107.20.209.78, where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?
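
    One way to tell a zone problem from a delegation problem is to query one of the Route 53 name servers directly and compare with a plain lookup (the name server host below is an example; use one from your hosted zone's NS set):

        dig +short fizaclegems.com @ns-123.awsdns-15.com  # ask Route 53 directly
        dig +short fizaclegems.com                        # ask your own resolver; should match once the NS change propagates
        whois fizaclegems.com | grep -i 'name server'     # confirm the registrar really saved the new name servers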

    Read the article

  • SSH Tunnel doesn't work in China

    - by Martin
    Last year I was working in China for a few months. I never bothered setting up a real VPN; I just created an SSH tunnel and changed my browser's proxy settings to connect through it. Everything worked great (except Flash, of course), but that was fine. However, now I'm back in China and I'm having problems with this approach. I do the same thing as last time, and according to https://ipcheckit.com/ my IP address is indeed the IP of my (private) server in the US, and I'm logging in to my server using a fingerprint I created long before going to China, so no MITM should be possible. Furthermore, the certificate from ipcheckit.com is from GeoTrust, so everything should be OK. However, I still can't access sites which are blocked in China. Any idea how this could be possible?
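
    For context, the tunnel in question is the standard dynamic forward, roughly:

        ssh -D 1080 -N -C user@my-server.example.com   # SOCKS proxy on localhost:1080 (host is a placeholder)
        # browser: SOCKS v5 proxy, host 127.0.0.1, port 1080
        # one thing to verify: remote/proxied DNS must be enabled (e.g. Firefox's
        # network.proxy.socks_remote_dns) -- otherwise lookups still happen locally,
        # and tampered local DNS answers will block sites even though the tunnel is up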

    Read the article

  • Is it a good idea to dynamically position and size controls on a form or statically set them?

    - by CrystalBlue
    I've worked mostly with interface-building tools such as Xcode's Interface Builder and Visual Studio's designer to place forms and position controls on screens. But I'm finding that with my latest project, placing controls on the form through a graphical interface is not going to work. This has more to do with the number of custom controls I have to create, which I can't see visually beforehand. When I first tackled this, I began to position all of my controls relative to the last ones that I created. Doing this had its own pros and cons. On the one hand, this gave me the opportunity to set one number (a margin, for example), and when I changed the margin, the controls all resized correctly relative to one another (such as shortening controls in the center while keeping controls next to the margin the same). But this started to become a spider's web of code that I knew wouldn't go very far before getting dangerous. Change one number and everything resizes; but remove one control and you've created many more errors and size problems for all the other controls. It became surgery rather than small changes to controls and layout. Is there a good way, or maybe a preferred way, to determine when I should be using relative or absolute positioning in forms?
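
    If this is WinForms (an assumption on my part), a lot of that relative-positioning arithmetic can be handed to the layout engine instead; a small sketch:

        // Anchoring keeps the control glued to the form's margins with no bookkeeping code:
        var textBox = new TextBox
        {
            Left = 12, Top = 12, Width = 300,
            Anchor = AnchorStyles.Top | AnchorStyles.Left | AnchorStyles.Right
        };
        this.Controls.Add(textBox);  // now it resizes with the form

        // For rows or grids of generated controls, a FlowLayoutPanel or TableLayoutPanel
        // removes the "remove one control, break the rest" coupling entirely.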

    Read the article

  • Context Sensitive JTable (Part 2)

    - by Geertjan
    Now, having completed part 1, let's add a popup menu to the JTable. However, the menu item in the popup menu should invoke the same Action as invoked from the toolbar button created yesterday. Add this to the constructor created yesterday:

        Collection<? extends Action> stockActions =
                Lookups.forPath("Actions/Stock").lookupAll(Action.class);
        for (Action action : stockActions) {
            popupMenu.add(new JMenuItem(action));
        }
        MouseListener popupListener = new PopupListener();
        // Add the listener to the JTable:
        table.addMouseListener(popupListener);
        // Add the listener specifically to the header:
        table.getTableHeader().addMouseListener(popupListener);

    And here's the standard popup enablement code:

        private JPopupMenu popupMenu = new JPopupMenu();

        class PopupListener extends MouseAdapter {

            @Override
            public void mousePressed(MouseEvent e) {
                showPopup(e);
            }

            @Override
            public void mouseReleased(MouseEvent e) {
                showPopup(e);
            }

            private void showPopup(MouseEvent e) {
                if (e.isPopupTrigger()) {
                    popupMenu.show(e.getComponent(), e.getX(), e.getY());
                }
            }
        }

    Read the article

  • mysqldump and WAMP

    - by Adam
    I am running a WAMP server and trying to use mysqldump to back up a MySQL database I have. The following is the PHP code I am using to run mysqldump:

        exec("mysqldump backup -u$user -p$pass > $sql_file");

    When I run the script the page just loads indefinitely, and the backup is not created. A blank file is being created, so I know something is happening. Extra info: exec() is not disabled, and PHP is not running in safe mode. Any ideas?? Win XP, WAMP, MySQL 5.0.51b
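
    Two usual catches on WAMP: mysqldump.exe isn't on the PATH (so the shell fails silently), and a wrong password makes mysqldump prompt for one and hang, which looks exactly like a page loading forever. A more debuggable sketch (the .exe path is an example; match it to your WAMP install):

        <?php
        $dump = '"C:\\wamp\\bin\\mysql\\mysql5.0.51b\\bin\\mysqldump.exe"';  // example path
        $cmd  = "$dump backup -u$user -p$pass > \"$sql_file\" 2> \"$sql_file.err\"";
        exec($cmd, $output, $exitCode);   // stdout -> dump file, stderr -> the .err file
        if ($exitCode !== 0) {
            echo "mysqldump failed (exit code $exitCode), see $sql_file.err";
        }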

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of MongoDB. Output from the script (to STDERR) contains the following exceptions, but the backup completes and the dump files are created:

        ###### WARNING ######
        STDERR written to during mongodump execution. The backup probably succeeded,
        as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host and port are correct. If I run mongodump --host=127.0.0.1:27017 --journal (which is the effective command from automongobackup, based on the options set and my reading of the src code), everything runs cleanly without any error reporting and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, yet a straight call to mongodump does not? Debian 6.0 Squeeze (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23)), AutoMongoBackup VER 0.9, MongoDB v2.0.2

    Read the article

  • How do I create the "Gnome-Desktop-Item-Edit" program's launch icon with root privileges and more?

    - by GanZ
    I personally don't prefer running commands in a terminal to achieve a task, and prefer apps that do the job. Creating launchers for apps is one such task, for which I prefer the gnome-desktop-item-edit application. If the gnome package is installed, just searching for "create launcher" opens the app. But on its own it doesn't serve the purpose, because for starters the application cannot create launchers for various apps without root permission, and there is the question of where the launchers are created. Usually launcher files can be created with root permission at /usr/share/applications, and without root permission at ~/.local/share/applications. I don't prefer the latter location, as it is vulnerable to deletion. Hence, in order to create launchers through GNOME with root, every time I am forced to open a terminal and use the command below:

        $ sudo gnome-desktop-item-edit ~/.local/share/applications --create-new

    I don't want to open a terminal every time I want to create an application launcher on Unity! I am able to lock the "Create Launcher" app in the Launcher, but not with root privileges. So I want to be able to create the "Create Launcher" app shortcut on Unity with root privileges by default, and for the app to create the launchers at /usr/share/applications by default. Please help! P.S. I don't have enough rep points to add screenshots to help with the question!
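
    A sketch of a .desktop file that pins the tool to the Launcher with root privileges (assumes the gksu package is installed; the icon name is a guess, pick any you like):

        # ~/.local/share/applications/create-launcher-root.desktop (example location)
        [Desktop Entry]
        Type=Application
        Name=Create Launcher (as root)
        Exec=gksudo "gnome-desktop-item-edit /usr/share/applications/ --create-new"
        Icon=gnome-panel-launcher
        Terminal=false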

    Read the article

  • 1 ASPX Page, Multiple Master Pages

    - by csmith18119
    So recently I had an ASPX page that could be visited by two different user types. User type A would use Master Page 1 and user type B would use Master Page 2. So I put together a proof of concept to see if it was possible to change the MasterPage in code. I found a great article on the Microsoft ASP.NET website: Specifying the Master Page Programmatically (C#) by Scott Mitchell. So I created a MasterPage called Alternate.Master to act as a generic placeholder. I also created a Master1.Master and a Master2.Master. The ASPX page, Default.aspx, will use this MasterPage. It will also use the Page_PreInit event to programmatically set the MasterPage:

        protected void Page_PreInit(object sender, EventArgs e)
        {
            var useMasterPage = Request.QueryString["use"];
            if (useMasterPage == "1")
                MasterPageFile = "~/Master1.Master";
            else if (useMasterPage == "2")
                MasterPageFile = "~/Master2.Master";
        }

    In my Default.aspx page I have the following links in the markup:

        <p>
            <asp:HyperLink runat="server" ID="cmdMaster1" NavigateUrl="~/Default.aspx?use=1" Text="Use Master Page 1" />
        </p>
        <p>
            <asp:HyperLink runat="server" ID="cmdMaster2" NavigateUrl="~/Default.aspx?use=2" Text="Use Master Page 2" />
        </p>

    So the basic idea is that when a user clicks the HyperLink to use Master Page 1, the Default.aspx.cs code-behind will set the MasterPageFile property to use Master1.Master, and the same goes for the link to use Master Page 2. It worked like a charm! To see the actual code, feel free to download a copy here: Project Name: Skyhook.MultipleMasterPagesWeb http://skyhookprojectviewer.codeplex.com

    Read the article

  • Cannot delete folder - Content seems to be nested recursively

    - by RikuXan
    I cannot delete a folder located on my hard disk by any means. I don't quite know how it was created; all I know is that it is a pretty deep structure of folders (too deep to delete in one go, since Windows restricts path names that are too long), and the problem in the end is that I can't "pull out" the inner folders, because they don't seem to be folders anymore (the context menu lacks things like "Properties", "Cut", "Copy", "Delete", etc.). Here is a picture of what a right-click looks like on one of these "folders". As you can see, the current folder is nested very deep, but that is not the problem; rather, it's the one I left-clicked on. Has anyone any advice on how to get rid of these? I tried chkdsk; it said no errors. I also tried deleting those folders from an Ubuntu VM in VMware, with no success. I also tried a batch file from a volunteer at the MS boards that should automatically de-nest such folders, but I guess mine is a special case, since the tool only created more such folders.
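
    For the path-too-long half of the problem, the classic workaround is to mirror an empty folder over the broken tree with robocopy, which deletes arbitrarily deep nesting (run from an elevated Command Prompt; paths are examples):

        mkdir C:\empty
        robocopy C:\empty "C:\path\to\broken folder" /MIR
        rmdir "C:\path\to\broken folder"
        rmdir C:\empty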

    Read the article
