Search Results

Search found 41338 results on 1654 pages for 'used'.


  • Build a graph of dependencies (calls) in JavaScript [on hold]

    - by Maximus
    I'm new to a project and I see that everything is so interwoven that small changes here make stuff break there. I'd like to refactor it and separate it into modules. For that I'm going to need a tool that can build a graph of dependencies (calls) to visualize the connections. There are many tools like that for languages like C#, but I've found little information about the available tools for JavaScript. Has anyone done something like this? What tools have you used?
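
    One starting point, as a sketch rather than a recommendation: the madge tool draws module-level dependency graphs from import/require statements (so it maps modules, not individual function calls, and only partly answers the question). With Graphviz installed:

        npm install -g madge
        madge --image graph.svg path/to/src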

    Read the article

  • Hybrid Columnar Compression

    - by user12620172
    You've heard me talk about the HCC feature for Oracle databases in the past. Hybrid Columnar Compression is a fantastic, built-in, free feature of Oracle 11gR2. One used to need an Exadata to make use of it. However, last October, Oracle opened it up and now allows it to work on ANY Oracle DB server running 11gR2, as long as the storage behind it is a ZFSSA over dNFS, or an Axiom over FC. If you're not sure why this is so cool or what HCC can do for your Oracle database, please check out this presentation. In it, Art will explain HCC, show you what it does, and give you a good idea of why it's such a game-changer for anyone holding lots of historical DB data. Did I mention it's free? Click here: http://hcc.zanghosting.com/hcc-demo-swf.html
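
    For a concrete sense of what "enabling HCC" looks like, here is a minimal sketch of the 11gR2 DDL (the table and columns are invented for illustration):

        CREATE TABLE sales_history (
            sale_id   NUMBER,
            sale_date DATE,
            amount    NUMBER
        )
        COMPRESS FOR QUERY HIGH;   -- warehouse compression

        -- or, for colder historical data:
        ALTER TABLE sales_history MOVE COMPRESS FOR ARCHIVE HIGH;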

    Read the article

  • DHCP won't start / subnetting

    - by user114371
    I recently changed the IP address on an Ubuntu 12.04 server I have in my lab, which is running isc-dhcp-server. After doing so and modifying the dhcpd.conf file, my DHCP service would not start. I basically used the same configuration, except I modified everything to use /25 scopes rather than /24. When I try to start / restart the service, I see the following:

        MY@ubuntuserver:~$ sudo service isc-dhcp-server restart
        stop: Unknown instance:
        isc-dhcp-server start/running, process 20918

    It looks like it starts, but it isn't actually running, and Webmin states that the DHCP service is not running. So my question is: does isc-dhcp-server support subnetted (CIDR-style) scopes, or must they be class A / B / C scopes (which doesn't seem likely)? I've double-checked the interface reference (this is a VM with only one defined eth0 interface) and everything else I can think of.
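
    For what it's worth, ISC dhcpd accepts any netmask, classful or not; a /25 scope is declared like this (the addresses below are made-up placeholders). Note also that dhcpd exits at startup unless the serving interface's own address falls inside one of the declared subnets, so /var/log/syslog usually shows the real reason it died:

        subnet 192.168.10.0 netmask 255.255.255.128 {
            range 192.168.10.50 192.168.10.100;
            option routers 192.168.10.1;
            option subnet-mask 255.255.255.128;
        }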

    Read the article

  • How to change the state of a singleton at runtime

    - by user34401
    Consider I am going to write a simple file-based logger AppLogger to be used in my apps. Ideally it should be a singleton so I can call it via:

        public class AppLogger {
            public static String file = "..";

            public static AppLogger getInstance() { ... }

            public void logToFile(String s) {
                // Write to file
            }

            public static void log(String s) {
                AppLogger.getInstance().logToFile(s);
            }
        }

    And to use it:

        AppLogger.log("This is a log statement");

    The problem is: what is the best time to provide the value of file, since it is just a singleton? Or how should I refactor the above code (or skip using a singleton) so I can customize the log file path? (Assume I don't need to write to multiple files at the same time.) P.S. I know I can use a library, e.g. log4j, but consider it just a design question: how would you refactor the code above?
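
    A minimal sketch of one common refactoring (not the only answer): keep the static convenience method, but make the path an explicit one-time initialization step instead of a hard-coded field. The names here are invented for illustration:

        public final class AppLogger {
            private static volatile AppLogger instance;
            private final String file;

            private AppLogger(String file) { this.file = file; }

            // Call exactly once at startup, e.g. from main(), before any log() call.
            public static void init(String file) { instance = new AppLogger(file); }

            public static void log(String s) { instance.logToFile(s); }

            private void logToFile(String s) {
                try (java.io.FileWriter w = new java.io.FileWriter(file, true)) {
                    w.write(s + System.lineSeparator());
                } catch (java.io.IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Going one step further and passing an AppLogger instance to the classes that need it (plain dependency injection) removes the singleton question entirely.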

    Read the article

  • dpkg reports error on package icaclient

    - by Photonics1
    I installed icaclient (a client for Citrix) a while back. I don't remember exactly what I did to get it working, but it was exhausting. I had to install some old packages not even available for Ubuntu (12.04), and in the end I used some files from old RPMs. Anyway, the client is more or less working now, but I always get a dpkg error when installing or updating something. The (translated) error message is something like:

        dpkg: error processing icaclient:i386 (--configure):
         subprocess installed post-installation script returned error exit status 2

    I just want to tell dpkg to ignore this, or to remove the post-installation script, but I don't know how. Thanks!
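
    The post-installation script dpkg keeps running lives in a predictable place, so one workaround (blunt, at your own risk, and only a sketch) is to inspect it and, if what it does is irrelevant to you, short-circuit it so it always succeeds:

        # Look at what actually fails:
        sudo less /var/lib/dpkg/info/icaclient.postinst
        # Last resort: insert 'exit 0' right after the shebang line...
        sudo sed -i '2i exit 0' /var/lib/dpkg/info/icaclient.postinst
        # ...then let dpkg finish configuring:
        sudo dpkg --configure icaclient:i386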

    Read the article

  • Can't install ATI proprietary drivers in 12.10

    - by EApubs
    I have a laptop with an ATI Radeon HD 6770M hybrid graphics card. In Ubuntu 12.04, I installed the fglrx driver through "Additional Drivers" and it worked (I could even switch GPUs). But in the new Ubuntu 12.10, after installing it, Unity won't load: only the mouse and the wallpaper appear. If I initialize the settings with sudo aticonfig --initial then after rebooting it gives a warning saying I'm in low-graphics mode! How do I fix this? PS: Earlier I used Software Sources to install the drivers, but when using the terminal I got this warning:

        update-alternatives: warning: forcing reinstallation of alternative
        /usr/lib/fglrx/ld.so.conf because link group x86_64-linux-gnu_gl_conf is broken

    Update: Filed a bug report on Launchpad: https://bugs.launchpad.net/fglrx/+bug/1068661
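
    A commonly suggested recovery sequence for a broken fglrx install on this era of Ubuntu, as a sketch (package names as they appeared in the 12.10 archive; no guarantee it addresses the hybrid-graphics part):

        sudo apt-get remove --purge fglrx fglrx-amdcccle
        sudo rm -f /etc/X11/xorg.conf
        sudo apt-get install --reinstall xserver-xorg-core libgl1-mesa-glx
        sudo apt-get install fglrx fglrx-amdcccle
        sudo aticonfig --initial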

    Read the article

  • Ubuntu 12.04 Battery problem

    - by Rahul
    I have my laptop (Dell Inspiron 14R) set up to dual-boot Ubuntu 12.04 and Windows 7. Under Windows 7, I used to get about 2.5-3 hours on a full charge (with my cellphone connected for 3G Internet). The battery is almost 2 years old. But in Ubuntu 12.04, I'm getting only about 1-1.5 hours on battery. I tried installing Jupiter, and also tried some tweaks mentioned in the forums, like editing the grub file, but nothing seems to work. I am not sure if some app is draining the battery. The one application I always use is Firefox; it's always open. Is there any way I could get at least 2.5 hours of battery time?
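
    To find out whether something really is draining the battery, powertop (in the 12.04 repositories) lists per-process and per-device power estimates; this is a diagnostic step, not a fix:

        sudo apt-get install powertop
        sudo powertop    # watch the power-estimate column for the worst offenders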

    Read the article

  • disk not accessible

    - by user107044
    I formatted my hard drive yesterday and it was working well, even after the formatting. But when I restarted my system, it shows that the space is allotted to my files but they are inaccessible. I have even tried to unhide the files and folders, in case they somehow got hidden, but nothing works. The hard drive is shown as empty, yet its properties say it still contains the data: http://imgur.com/ObjTE - in the image, the directory shows only 1 file of size 4.8 KB, but the space used on the drive is 11.6 GB. Please suggest a solution.
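
    Two quick checks worth running from a terminal, as a sketch (replace the mount point with your drive's actual one): du shows where the 11.6 GB actually sits, and a filesystem check sometimes stashes recovered files in lost+found, which file managers don't surface:

        sudo du -h --max-depth=1 /media/yourdrive
        sudo ls -la /media/yourdrive/lost+found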

    Read the article

  • Simplicity-efficiency tradeoff

    - by sarepta
    The CTO called to inform me of a new project and in the process told me that my code is weird. He explained that my colleagues find it difficult to understand due to the overly complex, often new concepts and technologies used, which they are not familiar with. He asked me to maintain a simple code base and to think of the others that will inherit my changes. I've put considerable time into mastering LINQ and thread-safe coding. However, others don't seem to care nor are impressed by anything other than their paycheck. Do I have to keep it simple (stupid), just because others are not familiar with best practices and efficient coding? Or should I continue to do what I find best and write code my way?

    Read the article

  • How can I make Google show unit conversions by default?

    - by bUbUKid
    When I search for "4 inches in g" in Firefox on Windows, I immediately get a unit conversion done by Google that shows up before the actual search results. On my Ubuntu 12.04 system this does not work, though. I tried Firefox and Chromium and have no script blockers installed. I also switched off AdBlock Plus for testing, but to no avail. I realize this is not really Ubuntu doing something wrong, but: are there any settings I can modify to make Google show these results? I use them quite frequently and I believe (though I cannot test it anymore) that this used to work on my last Ubuntu system. Maybe there are some script sources that Ubuntu has disabled by default, or something like that?

    Read the article

  • PHP file upload problem [closed]

    - by newcomer
    This code works properly on my localhost (XAMPP 1.7.3), but when I put it on the live server it shows "Possible file upload attack!". 'upload/' is a folder under the 'public_html' folder on the server, and I can upload files to that directory via other scripts.

        <?php
        $uploaddir = '/upload/'; // I used C:/xampp/htdocs/upload/ on localhost. Is it correct here?
        $uploadfile = $uploaddir . basename($_FILES['file_0']['name']);
        echo '<pre>';
        if (move_uploaded_file($_FILES['file_0']['tmp_name'], $uploadfile)) {
            echo "File is valid, and was successfully uploaded.\n";
        } else {
            echo "Possible file upload attack!\n";
        }
        echo 'Here is some more debugging info:';
        print_r($_FILES);
        print "</pre>";
        ?>
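
    One thing to note (a likely culprit rather than a confirmed diagnosis): on a Linux server, '/upload/' is resolved against the filesystem root, not against public_html, so move_uploaded_file fails. Anchoring the path to the web root is one fix:

        $uploaddir = $_SERVER['DOCUMENT_ROOT'] . '/upload/';
        // or spell out the full path, e.g. '/home/youruser/public_html/upload/'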

    Read the article

  • SEO perspective on a non-existent directory base in the URL?

    - by Sandro Dzneladze
    I'm wondering whether there is any SEO/readability/memorability benefit to using this kind of URL structure for my upcoming project: www.moviereviews.com/movie/name - considering that /movie is not a real directory, so that page doesn't exist on disk. It's similar to the /category/ base WordPress uses purely to separate content on the site. What do you think? For the user it seems beneficial: if the domain doesn't signal what the content is about, the extra directory will. Correct? But from an SEO perspective?
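
    Mechanically, such a virtual base is just a rewrite; a sketch for an .htaccess file with Apache mod_rewrite (the script name review.php is invented for illustration):

        RewriteEngine On
        RewriteRule ^movie/([^/]+)/?$ /review.php?name=$1 [L,QSA]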

    Read the article

  • Problem installing binutils

    - by user3667930
    When I try to install mspgcc on Ubuntu 14.04, I get an error at the "make" step while building binutils. These are the commands I used. Please help me fix this error; thanks in advance.

        wget http://ftpmirror.gnu.org/binutils/binutils-2.21.1a.tar.bz2
        tar xvfj binutils-2.21.1a.tar.bz2
        cd binutils-2.21.1
        patch -p1 < ../mspgcc-20120406/msp430-binutils-2.21.1a-20120406.patch
        cd ..
        mkdir -p BUILD/binutils
        cd BUILD/binutils
        ../../binutils-2.21.1/configure --target=msp430 --program-prefix="msp430-" \
            --with-mpfr-include=/usr/local/include -with-mpfr-lib=/usr/local/lib \
            --with-gmp-include=/usr/local/include -with-gmp-lib=/usr/local/lib \
            --with-mpc-include=/usr/local/include -with-mpc-lib=/usr/local/lib
        make -j 4
        sudo make install
        cd ../..
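
    Since the actual error text isn't shown, two generic suggestions as a sketch: install the build tools that old binutils releases most often trip over on 14.04, and rerun make single-threaded so the first real error isn't buried in parallel output:

        sudo apt-get install build-essential texinfo flex bison
        make 2>&1 | tee build.log    # no -j, so the first failure is easy to spot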

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. Now, a number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it dedups internally, it may reduce your redundant metadata to a single block on the non-volatile storage. When that block is corrupted, you have essentially three corrupted copies: three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block; a metadata block looks no different to its inner mechanism than a normal data block, because there is no way to tell it that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you use a device with block-level deduplication. For most implementations you have to activate it explicitly by command, whereas certain devices do it by default or by design and you don't know about it. I'm not perfectly sure about this, but given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. But when your device dedups internally, it may remove your redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy with a good dedup ratio. You can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason to give some extra thought to putting your zpool on a single LUN, especially when that LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you use the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools: rotating rust is used for the pool, and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt: you fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and whose LUNs you use for your pool; however, as mentioned before, on those devices enabling it is a user-made decision, so it's less probable that you deduplicate your redundancies unknowingly. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancy, dispersing it over several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a single device can dedup away redundancy that you write to a totally different and independent device.
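
    The two ZFS commands the post leans on look like this in practice (the dataset and pool names are placeholders):

        zfs set copies=2 tank/important    # ask ZFS to keep extra ditto copies
        zdb -uu tank                       # dump the uberblocks / rootblock pointers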

    Read the article

  • Google is still crawling and indexing my old dummy test pages, which now return 404 Not Found

    - by Ace
    I set up my site with sample pages and data (lorem ipsum, etc.) and Google crawled those pages. I have since deleted them and added real content, but in Webmaster Tools I still get a lot of 404 errors from Google trying to crawl the old pages. I have set them to "mark as resolved", but some pages still come back as 404. Furthermore, a lot of these sample pages are still listed when I do a search for my site on Google. How do I remove them? I think these irrelevant pages are hurting my rating. I actually wanted to erase all these pages and have my site indexed as a new one, but I read that it's not possible? (I have submitted a sitemap and used "Fetch as Google.")
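
    One thing that tends to speed de-indexing (a sketch; the paths below are made up): answering 410 Gone instead of 404 tells crawlers the pages were removed deliberately. With Apache's mod_alias this is a one-liner per URL in .htaccess:

        Redirect gone /lorem-ipsum-page.html
        Redirect gone /sample-category/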

    Read the article

  • Programming interview: How do you debug a program?

    - by Jake
    I was recently asked the following question in an interview: how do you debug a C++ program? I started by explaining that programs may have syntax and semantic errors. The compiler reports syntax errors, which can be corrected. For semantic errors, various debuggers are available; I specifically talked about gdb, which is command-line based, and the Visual Studio IDE's debugger, which has a GUI, and their common commands. I also talked about debug and release builds, how assertions should be used in debug builds, how exceptions help with automatic cleanup and putting the program in a valid state, and how logging can be useful (e.g. using std::clog). I want to know whether this answer is complete or not. Also, I'd like to hear how other people would go about answering this question in a structured manner. Thanks.
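
    For interviews like this it can help to have a concrete gdb session in mind; a minimal sketch (the file and variable names are invented):

        g++ -g -O0 crash.cpp -o crash    # build with debug info, no optimization
        gdb ./crash
        (gdb) run                        # reproduce the fault
        (gdb) bt                         # backtrace to locate the failing frame
        (gdb) frame 2                    # jump into a suspect frame
        (gdb) print someVariable         # inspect state
        (gdb) watch counter              # break whenever 'counter' changes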

    Read the article

  • Why has Files (Nautilus) stopped updating partition bookmarks?

    - by YuriC
    I upgraded from 13.04 to 13.10 and noticed that Files (Nautilus) stopped updating my bookmarks that point into another partition (an ext4 one). It used to work before. Testing, I've found that if I add any new bookmark (using Ctrl+D, for example), Files adds the new one and updates all bookmarks, showing the ones that point to my partition. I conclude that the feature (updating bookmarks) works, but it isn't being triggered when I mount my partition by clicking on it. Any hints on how to solve this? Bookmarks really speed up everyday activities.

    Read the article

  • Syncing objects to a remote server, and caching on local storage

    - by Harry
    What's the best method of syncing objects (as JSON) to a remote server, with local caching? I have some objects that will pretty much just be plain text with some extra metadata. I was thinking of including a "last modified date" for both local storage and remote storage, which could then be used to determine which copy of an object is the most recent. For example, even though objects are saved to both local and remote storage whenever they change, sometimes the user may not have internet access, the server may be down, or any number of other things. In that case, the last modified date for remote storage would stay at its previous value, while local storage stays current. The user could then exit the application, and when they reload it, the application would look at the last modified dates of the local and remote copies and decide which wins. Is there anything I'm missing with this? Is there a better method I could use?
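
    The comparison step described here is essentially last-write-wins; a minimal sketch in TypeScript (the field names are assumptions, and it presumes reasonably synchronized clocks between client and server):

        interface Doc {
            id: string;
            text: string;
            modified: number; // epoch milliseconds, set on every save
        }

        // Return whichever copy was saved most recently; either side may be absent.
        function pickNewer(local: Doc | null, remote: Doc | null): Doc | null {
            if (!local) return remote;
            if (!remote) return local;
            return local.modified >= remote.modified ? local : remote;
        }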

    Read the article

  • System checks for disk drive error every time it boots

    - by Starx
    When disk space on my Ubuntu installation partition was getting low, I used GParted from a live CD to increase its capacity, deleting another partition and merging the space into the Ubuntu partition. Ever since, I get disk error checking at the boot screen for my partitions, every time. What seems to be causing this, and how do I fix it? Update: here is my boot.log, in case it provides some insight:

        fsck from util-linux 2.19.1
        fsck from util-linux 2.19.1
        /dev/sda1 was not cleanly unmounted, check forced.
        ubuntu: clean, 501325/1310720 files, 2958455/5242880 blocks
        /dev/sda1: 241/51272 files (3.3% non-contiguous), 73541/102400 blocks
        mountall: fsck /boot [358] terminated with status 1
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox ...

    /dev/sda1 is a separate GRUB partition for my dual OS's.
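
    The "was not cleanly unmounted, check forced" line suggests the filesystem's clean flag never gets set. A forced manual check often clears it; run this from a live CD (or anywhere the partition is not mounted), as a sketch:

        sudo umount /dev/sda1        # must not be mounted during the check
        sudo fsck -f /dev/sda1       # forced check; lets e2fsck mark it clean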

    Read the article

  • Run a script with user interaction on log out / shutdown?

    - by lumbric
    I'd like to run a script on shutdown that interrupts the logout process and pops up a window with zenity. My goal is to get autofsck working with lightdm. It seems to work on 12.04 after installing the old *.deb file, if one runs the check script manually. To be useful, it should run automatically on logout and ask the user whether to check the disk on shutdown. There is a session-cleanup-script option in /etc/lightdm/lightdm.conf which seems to work if a bash file is given with its full path (I can't place the command directly there). But if I press shutdown, there is no time for a user choice. Is there any other option to solve this problem?
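
    For reference, the wiring being described looks roughly like this (the script path and dialog text are invented; whether the dialog gets time to appear before shutdown proceeds is exactly the open question):

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        session-cleanup-script=/usr/local/bin/ask-fsck.sh

        # /usr/local/bin/ask-fsck.sh (mark executable with chmod +x)
        #!/bin/bash
        export DISPLAY=:0    # the script runs as root, outside the user session
        if zenity --question --text="Check the filesystem on the next boot?"; then
            touch /forcefsck    # standard flag file: forces fsck at next boot
        fi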

    Read the article

  • How can I upgrade from Ubuntu 9.10 to 11.10?

    - by Chinnu
    We need to program in CUDA 5.0, which can be installed only on Ubuntu 11.10 or 12.04. Our current version, 9.10, is no longer supported, so we chose to proceed with a clean installation. Since we have a shared workstation, we used Clonezilla to clone the system first. However, booting from the live CD produced an unexpected error. We also tried to install 11.10 on an external HDD by partitioning it, but GParted could not be installed and terminated with the error "installArchives() failed", which we couldn't solve even after modifying sources.list. Is there a way to proceed with this upgrade?
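
    One detail that may explain the sources.list trouble: 9.10 (karmic) is end-of-life, and its packages moved to the old-releases archive, so the normal mirrors return errors. A sketch of the sources.list entries that point apt at the right place:

        deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ karmic-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ karmic-security main restricted universe multiverse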

    Read the article

  • Asset Discovery Video

    - by Owen Allen
    A while back, I mentioned that we'd started putting together videos that explain some aspects of Ops Center. (The first one I talked about shows you how to create a server pool.) Well, there's another video that I wanted to show you; this one is about discovering assets. There are a few different tools you can use to discover assets in Ops Center, each one appropriate for different types of assets or different environmental needs. Salvador put together this video that walks you through the options in the Add Assets wizard, explaining when each option is used and how to use them. We're adding more videos as we go, so if there's something else you'd like to see explained in video form, let me know.

    Read the article

  • Resize hard drive partition to make more space for /var

    - by user3357381
    I am running out of space in the /var partition. I have plenty of space in my /home partition. How do I shrink the /home partition to make more space for the /var partition? I have read some blogs that say to use the GParted Live CD; as a new user, I'm not quite sure if this is the ideal route. What is the best way to create more space for /var? Output of df -h:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda2        19G  7.1G   11G  41% /
        none            4.0K     0  4.0K   0% /sys/fs/cgroup
        udev            7.9G  8.0K  7.9G   1% /dev
        tmpfs           1.6G  1.5M  1.6G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            7.9G  624K  7.9G   1% /run/shm
        none            100M   60K  100M   1% /run/user
        /dev/sda4       454M   75M  352M  18% /boot
        /dev/sda5       2.3G  2.1G   36M  99% /var
        /dev/sda3       178G  1.3G  168G   1% /home
        /dev/sda6       2.8G  5.8M  2.6G   1% /tmp
        /dev/sdb1       3.7T  401G  3.3T  11% /hdd
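
    Short of repartitioning, one stopgap that needs no live CD (a sketch; pick a genuinely heavy /var subtree and stop the services that use it first): move the subtree onto /home, where 168G is free, and bind-mount it back into place:

        sudo mv /var/cache/apt /home/var-cache-apt
        sudo mkdir /var/cache/apt
        echo '/home/var-cache-apt /var/cache/apt none bind 0 0' | sudo tee -a /etc/fstab
        sudo mount /var/cache/apt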

    Read the article

  • Exadata - Following up on customer deployments

    - by Carlos M. Orozco -Oracle
    Over the last year or so I've been visiting customers who have deployed Exadata and have been enjoying the benefits the platform provides: greater performance, consolidation of multiple databases, data compression, and faster time to value. Most often I hear "my reports run faster." At one hospitality company, reports that used to take 3 hours now run in 12 seconds; another services company reported that batch reports which took 11 hours now finish in 38 minutes, and that their transactions post faster and batch updates run faster. So what does that mean? For most of them, it means they now have a platform that can handle growth. Most are growing 15% organically, but I've also seen 40% growth through acquisition. Exadata has been keeping up with the additional data demand as customers leverage compression and the smart storage features.

    Read the article

  • IE 9 RC may be released on 10 February

    - by anirudha
    This is not surprising; we all know that they frequently postpone product release dates. I don't know what to make of it. Maybe it's a trick Microsoft uses to build buzz for its software, but sometimes it leaves a bad impression on users. In 2009, Microsoft put up a countdown widget for the launch of Visual Studio 2010, which many MSDN bloggers used. Somasegar was one of them, putting the widget on his blog to show how much time remained until Visual Studio was released. But after the date was postponed, I don't know where the widget went; the site that provided it is down. They used the same trick when they pushed the Visual Studio release date from March 20 to April 12. So expect to wait a bit longer, and next time don't believe a product will really ship on the date a blog countdown shows.

    Read the article
