Search Results

Search found 2447 results on 98 pages for 'automatic'.


  • Using Clojure instead of Python for scalability (multi-core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, ease of use aside, I shouldn't be coding in Python anymore, but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or another functional language? It seems that all of Clojure's benefits come from using immutable data; can't I do just that in Python and get all the same benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think some things are more imperative in nature, which makes it difficult to model them in a functional way, I guess. The thing is: is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't consider semaphores, locks, or other similar concurrency mechanisms to be good alternatives to Clojure's 'automatic' parallelization.
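    A hedged aside on the multi-core point: CPython's GIL serializes threads, not processes, so CPU-bound Python can still use every core through the standard library's multiprocessing module. A minimal sketch (the workload function is invented):

        from multiprocessing import Pool

        def cost(n: int) -> int:
            # Stand-in for a CPU-bound task; the GIL is irrelevant here because
            # each worker is a separate process with its own interpreter.
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            with Pool() as pool:  # defaults to one worker per core
                results = pool.map(cost, range(10_000, 10_008))
            print(results)

    This is roughly the niche Clojure's pmap fills, though Clojure gets shared-memory parallelism where multiprocessing pays serialization costs at the process boundary.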

    Read the article

  • svcbundle for easier SMF manifest creation

    - by Glynn Foster
    One of the new features we've introduced in Oracle Solaris 11.1 is a new utility called svcbundle. This utility allows for the easy creation of Service Management Facility (SMF) manifests and profiles, letting you take advantage of the benefits of automatic application restart without requiring full knowledge of the XML file format used when integrating with SMF. Integrating into SMF is one of the easiest and most obvious ways to take advantage of some of the mission-critical aspects of the operating system, but many customers found the initial learning curve of creating an XML manifest too steep. With the release of Oracle Solaris 11 11/11, SMF took on a more integral role as more and more system configuration started to use the SMF configuration repository for backend storage. This provides a number of benefits, including the ability to carefully manage customized administrative configuration, site-specific configuration, and vendor-provided configuration at different layers, helping to preserve them during system update. I've written a new article, Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11, to give you a feel for the help we can provide in converting your applications over to SMF, or doing some site-wide configuration using profiles. This is the first pass at creating such a tool, so we'd love to hear feedback on your experiences using it.

    Read the article

  • Can't update packages or use Ubuntu Software Center (E: Some index files failed to download. They have been ignored, or old ones used instead.)

    - by user94189
    I get the following errors when trying to run the auto Update Manager:

        Could not initialize the package information
        An unresolvable problem occurred while initializing the package information.
        Please report this bug against the 'update-manager' package and include the following error message:
        'E:Encountered a section with no Package: header,
        E:Problem with MergeList /var/lib/apt/lists/ppa.launchpad.net_bugs-sehe_zfs-fuse_ubuntu_dists_precise_main_binary-amd64_Packages,
        E:The package lists or status file could not be parsed or opened.'

    When I run sudo apt-get update, I get these errors:

        W: GPG error: http://us.archive.ubuntu.com precise Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
        W: GPG error: http://ppa.launchpad.net precise Release: The following signatures were invalid: BADSIG 1FFD34C9EB13C954 Launchpad CoverGloobus
        W: GPG error: http://ppa.launchpad.net precise Release: The following signatures were invalid: BADSIG 4A67F94CBF965FF5 Launchpad PPA for Happy-Neko
        W: Failed to fetch http://ppa.launchpad.net/bugs-sehe/zfs-fuse/ubuntu/dists/precise/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/bugs-sehe/zfs-fuse/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/bugs-sehe/zfs-fuse/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • PHP Image gallery that integrates well into custom CMS

    - by Thorarin
    I've been trying to find an image gallery that plays nice with our custom CMS. I've evaluated a number of them, but none seems to have the feature list that I would like:
      - Runs in a LAMP environment
      - Free software or low license costs (the website belongs to a non-profit organisation)
      - Multi-user support
      - Multiple albums. We're posting concert pictures, and would like an album per event.
      - Pluggable authentication system. I want to reuse the accounts we have for our CMS. Permissions can be done inside the gallery itself, but I want a single sign-on solution in a maintainable manner, by writing my own plugin/add-on for the software.
      - Upload support (multiple images at the same time)
    And preferably also:
      - Can be integrated into a PHP page layout without IFRAMEs
      - Automatic resizing of uploaded images to a maximum size
      - Ability for visitors to place comments
    This combination is proving hard to find, especially the authentication requirement. I don't want to mess around all over the place in the source code to make it use the existing authentication. A plugin would be ideal, but alternatively a well-thought-out software design that allows for maintainable surgical changes would be acceptable. Any suggestions on which software I should take a closer look at?

    Read the article

  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated server hosting service. They offer:
      - Roughly the same server spec
      - Ticketing-system support
      - Managed daily backups
      - Virtual firewall (but with a limit of 10 IP addresses allowed through at any one time)
    Now, this managed hosting comes at extra expense - somewhere in the region of $500 per month - and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is that it would be better and cheaper to:
      - Stay with the same host, since the dedicated box is fine
      - Get an Amazon AWS account and use their servers to manage backups; there are a number of good tools that can be used to automate the process (see the sketch below)
      - Configure iptables so that I have complete control of the firewall
    I want to know:
      - Is a managed virtual firewall likely to be more secure than me configuring iptables?
      - Whether, in your opinion, it's best to let someone else take care of backups?
      - If, from your experience, there's anything else I'm missing that warrants using managed hosting over a DIY service?
    I think there is some reluctance to give up managed hosting, since a managed host in effect takes responsibility for your server, whereas any hardware or security issues with a server that we manage would mean we are forced to hold our hands up when a client site goes down. That said, I personally don't think a managed host does that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).
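    A hedged sketch of the backup leg, assuming "use AWS" means shipping nightly archives to S3 (bucket name and paths are invented; boto3 must be installed and credentials configured outside the script):

        import subprocess
        import time

        import boto3  # AWS SDK for Python; assumed available

        BUCKET = "example-backup-bucket"  # hypothetical bucket

        def nightly_backup() -> None:
            stamp = time.strftime("%Y%m%d")
            archive = f"/tmp/site-{stamp}.tar.gz"
            # Archive the web root; adjust the path for your layout.
            subprocess.run(["tar", "czf", archive, "/var/www"], check=True)
            boto3.client("s3").upload_file(archive, BUCKET, f"backups/site-{stamp}.tar.gz")

        if __name__ == "__main__":
            nightly_backup()  # intended to be run from a daily cron entry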

    Read the article

  • Database in a Box

    - by A&C Redaktion
    The Oracle Database Appliance: a reliable, easy-to-use and affordable database system. At last a database system is coming to market that is tailored to the needs of smaller businesses: the Oracle Database Appliance (ODA). After all, not everyone who has large volumes of data to manage can reach straight for Exadata and co. The compact "database in a box" combines software, server and storage capacity and offers various networking options. It contains two clustered SunFire servers running Oracle Linux, with Oracle Database 11g Release 2 preinstalled. One of the ODA's big advantages: the database grows with the needs of the business. The performance of the cluster can be adjusted by successively enabling from two up to 24 cores via "pay-as-you-grow" software licensing. It also offers high availability for custom and packaged OLTP as well as general-purpose databases, even in large numbers. Oracle Real Application Clusters and Oracle Automatic Storage Management protect against server and storage-system failures, respectively. Proactive system monitoring, one-click software provisioning, integrated patches across the entire stack and an automatic call-home on hardware failures save maintenance costs and resources. Through the Oracle PartnerNetwork, customers have access to a large number of cross-industry and industry-specific applications that benefit from the improved availability of the Oracle Database Appliance. The trade press is also taking a close look at the new Oracle Database Appliance: detailed reports have appeared in Computerwoche and heise online, among others. The Admin-Magazin offers a short but apt overview; a similarly clear, somewhat more detailed account can be found on the website of DOAG e.V. In the webcast on the Oracle Database Appliance, Judson Althoff discusses, among other things, its significance for the partner business.

    Read the article

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Our main product's development team is now shifting to feature-based development, and we need to support several DBs whose data schemas are not identical (the schema changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches, and are discussing this one: one working DB (WDB) with one DB for each feature branch (FDB). Approved data is moved from the WDB to the FDB, so we need to support only one script for each branch. This script will extract data from the corresponding FDB. Nevertheless, we have to code the differences between the FDBs and the WDB manually. Maybe some automatic mapping tools exist? I also wish to know whether classic solutions to problems like this already exist. Can anyone share best practices for this case?
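    No off-the-shelf tool is named here, but the manual WDB-to-FDB differences can at least be centralized instead of scattered. A hedged Python sketch, with all branch, table and column names invented: keep one declarative column map per feature branch and translate rows when promoting data:

        # Per-branch column maps: WDB column name -> FDB column name.
        BRANCH_MAPS = {
            "feature-x": {"article_id": "item_id", "body": "content"},
            "feature-y": {"article_id": "item_id"},  # schema differs only slightly
        }

        def translate_row(row: dict, branch: str) -> dict:
            # Columns absent from the map keep their WDB names.
            mapping = BRANCH_MAPS.get(branch, {})
            return {mapping.get(col, col): val for col, val in row.items()}

        wdb_row = {"article_id": 7, "body": "...", "author": "vp"}
        print(translate_row(wdb_row, "feature-x"))
        # {'item_id': 7, 'content': '...', 'author': 'vp'}

    The point of the single map is that schema drift in one branch becomes a one-line change rather than an edit to every extraction script.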

    Read the article

  • Blogger homepage won't update!

    - by Sims Siniron
    I am new to blogging and Webmaster Tools. When I add a new post to my blog, it used to automatically update my homepage as well. But since the 14th of January, my homepage hasn't been updated in the Google SERPs. As a result I am losing my popularity in the SERPs. Previously, when I posted a new article, 70-80% of them would make the first page of results. But after the problem occurred, none of them reaches even the top 15 pages of the Google SERPs :( On 1/12/12, Google sent me a "Notice of DMCA removal from Google Search" message indicating one of my URLs contained some infringing content, which I deleted after receiving their notice. Not only that, I also checked all of my posts for any additional infringing content. After removing it, I filled out Google's Content Removed Notification form to notify them, and Google sent me feedback that they had received it, suggesting "In the future, if you have removed the allegedly infringing content from your site (and won’t put it back), please use the correct form", which I also filled out. Now my question is: is everything alright with what I did? Although my new posts are indexed in the Google SERPs with "..", why hasn't Googlebot updated my homepage, which previously updated automatically whenever a new article was published?

    Read the article

  • Xlib: extension "GLX" missing on display ":0". Asus k53s

    - by Steve
    I had everything working fine. I use a number of OpenGL graphics programs, for example PyMOL, MGLTools, VMD, BALLView, RasMol, etc. All of these now give the error: Xlib: extension "GLX" missing on display ":0". and fail to initialize. I have an i7 Asus K53S with NVIDIA GeForce graphics, and I need these programs to do my work. I tried the Bumblebee fix, and also removing all NVIDIA drivers, and rolling back. I do not know why these were working and then stopped, but I did notice a new NVIDIA console in the menu, which I assume came from enabling the automatic NVIDIA feeds. I also played with xorg.conf, but have no clue what settings are valid. In addition, half the time the display is not recognized now: it just gives a generic 640x480 and I have to log in and out 10 times to get it to return to a normal setting. When I try to set it manually, no other setting is allowed from the settings menus, and changes made in the terminal just get reset every time I log out.

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:
      - Making changes on a development domain and committing the code frequently
      - Archiving the entire site after a successful deployment from the development domain
      - Having automatic daily database and user-content backups
    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
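    A minimal sketch of the daily-backup bullet, assuming MySQL and a conventional layout (paths and the database name are invented; mysqldump must be on PATH):

        import shutil
        import subprocess
        import time

        SITE_DIR = "/var/www/mysite"      # hypothetical web root
        DUMP_FILE = "/backups/db.sql"     # hypothetical backup location

        def daily_backup() -> None:
            stamp = time.strftime("%Y-%m-%d")
            # Dump the database as plain SQL; text dumps diff and compress well,
            # which is also why keeping them in git is not a crazy idea.
            with open(DUMP_FILE, "w") as out:
                subprocess.run(["mysqldump", "--single-transaction", "mydb"],
                               stdout=out, check=True)
            # Snapshot the site itself as a dated archive kept outside the web root.
            shutil.make_archive(f"/backups/site-{stamp}", "gztar", SITE_DIR)

        if __name__ == "__main__":
            daily_backup()  # intended to be run from a daily cron entry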

    Read the article

  • Packing up files on my machine, sending it to a server, and unpacking it

    - by MxyL
    I am implementing a feature in my application that sends all files in a specified folder to a server. I have the basic FTP transaction set up using Apache Commons FTPClient: it sets up a connection and transfers a file from one place to another. So I could simply loop over the directory and use this connection to transfer all the files. However, this could be better. Rather than transferring each file one by one, it makes more sense to pack them up in a compressed archive and then send the whole thing at once. This saves time and bandwidth, since these are just text files and so compress nicely. I would therefore like to add automatic archive packing and unpacking. This is the workflow I have planned out, using zip compression:
      1. Zip all files in the folder
      2. Send the file over
      3. Unzip the files at the destination
    Steps 1 and 2 are easy, since the files are on the local machine, but I'm not sure how to accomplish the last step, when the files are on a remote server. What are my options? I have control over what I can put and run on the server. Perhaps it is not necessary to do the packing/unpacking myself?
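    The question is about Java's Apache Commons FTPClient, but as a language-neutral sketch of the same workflow, here is the whole client side in Python's standard library (host and credentials are placeholders). Step 3 still has to run on the server, e.g. a small cron job or SSH command that unzips new arrivals:

        import zipfile
        from ftplib import FTP
        from pathlib import Path

        def ship_folder(folder: str, host: str, user: str, password: str) -> None:
            archive = Path(folder).with_suffix(".zip")
            # 1. Pack every file in the folder into one compressed archive.
            with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
                for path in Path(folder).rglob("*"):
                    if path.is_file():
                        zf.write(path, path.relative_to(folder))
            # 2. Send the single archive in one FTP transfer.
            with FTP(host) as ftp:
                ftp.login(user, password)
                with open(archive, "rb") as fh:
                    ftp.storbinary(f"STOR {archive.name}", fh)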

    Read the article

  • High Availability

    - by mattjgilbert
    Udi Dahan presented at the UK Connected Systems User Group last night. He discussed high availability and pointed out that people often think this is purely an infrastructure challenge. However, the implications of system crashes, errors and the resulting data loss need to be considered and managed by software developers. In addition, a system should remain both highly reliable (backwardly compatible) and available during deployments and upgrades. The argument is that you cannot be considered highly available if your system is down every time you upgrade. For our recent BizTalk 2009 upgrade we made use of our Business Continuity servers (note the name, rather than calling them Disaster Recovery servers) to ensure our clients could continue to operate while we upgraded the Production BizTalk servers. Then we failed back to the newly built 2009 environment and rebuilt the BC servers. Of course, in the event of an actual disaster there was a window where one or the other set was not available to take over; however, our Staging machines were already primed to switch to production settings, having been used for testing the upgrade in the first place. While not perfect (the failover between environments was not automatic and not without some minimal outage), planning the upgrade in this way meant BizTalk was online during the rebuild and upgrade project, we didn't have to rush things to get back online, and planning meant we were ready to be as available as we could be in the event of an actual disaster.

    Read the article

  • Randomly and uniquely iterating over a range

    - by Synetech
    Say you have a range of values (or anything else) and you want to iterate over the range and stop at some indeterminate point. Because the stopping value could be anywhere in the range, iterating sequentially is no good: it causes the early values to be accessed more often than later values (which is bad for things that wear out), and it also reduces performance, since extra values must be traversed. Random iteration is better because it will (on average) increase the hit rate, so that fewer values have to be accessed before finding the right one, and it also distributes the accesses more evenly (again, on average). The problem is that the standard method of randomly jumping around will access some values multiple times, and has no automatic way of determining when every value has been checked and thus the whole range has been exhausted. One simplified and contrived solution would be to make a list of each value, pick one at random, then remove it. Each time through the loop, you pick one from the set of remaining items. Unfortunately this only works for small lists. As a (forced) example, say you are creating a game where the program tries to guess what number you picked and shows how many guesses it took. The range is 0-255 and instead of asking "Is it 0? Is it 1? Is it 2?…", you have it guess randomly. You could create a list of 256 numbers, pick randomly and remove the pick each time. But what if the range were 0-2^32? You can't really create a 4-billion-item list. I've seen a couple of RNG implementations that are supposed to provide a uniform distribution, but none that are also guaranteed to be unique, i.e., no repeated values. So is there a practical way to randomly and uniquely iterate over a range?
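    One practical answer, as a hedged sketch: a full-period linear congruential generator. By the Hull-Dobell theorem, x -> (a*x + c) mod 2^bits visits every value in the range exactly once when c is odd and a ≡ 1 (mod 4), giving a scrambled, repeat-free walk with O(1) memory and a built-in "exhausted" point:

        def full_cycle_iter(seed: int = 0, bits: int = 32):
            # Hull-Dobell conditions for modulus m = 2**bits: c odd, a % 4 == 1.
            # The LCG then has full period, so each value in [0, m) appears
            # exactly once before the sequence repeats.
            m, a, c = 2 ** bits, 1664525, 1013904223  # classic full-period constants
            x = seed % m
            for _ in range(m):
                yield x
                x = (a * x + c) % m

        # The 0-255 guessing game from the question (bits=8 for a quick demo;
        # bits=32 covers the 4-billion-value range with no list at all):
        secret = 137
        guesses = next(i for i, v in enumerate(full_cycle_iter(bits=8), 1) if v == secret)
        print(f"found in {guesses} guesses")

    The walk is scrambled but not statistically strong; for spreading wear-leveling-style accesses that is usually enough, and the same idea scales to any power-of-two range.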

    Read the article

  • Why is my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After having run apt-get purge and apt-get autoremove and emptied the Trash can, I still have this problem, as shown by a screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I didn't configure my disks correctly, and the partition /dev/sda6 is not mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody):
      - My home directory is not encrypted.
      - The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself and manually.)
      - After I mount /dev/sda6, the command df -h gives:

            Filesystem      Size  Used  Avail  Use%  Mounted on
            /dev/sda7       244G  221G    12G   96%  /
            udev            3,9G  4,0K   3,9G    1%  /dev
            tmpfs           1,6G  904K   1,6G    1%  /run
            none            5,0M     0   5,0M    0%  /run/lock
            none            3,9G  164K   3,9G    1%  /run/shm
            /dev/sda6       653G  189G   433G   31%  /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.

    Read the article

  • Game engine lib and editor

    - by luke
    I would like to know the best way/best practice to handle the following situation. Suppose the project you are working on is split into two sub-projects: a game engine lib and an editor GUI. Now, you have a method bool Method( const MethodParams &params ) that will be called during game-level initialization, so it is a method belonging to the game engine lib. The parameters of this method, passed by reference in the structure MethodParams, can be decided via the editor, in the level-design phase. Suppose the structure is the following:

        enum Enum1
        {
            E1_VAL1,
            E1_VAL2,
        };

        enum Enum2
        {
            E2_VAL1,
            E2_VAL2,
            E2_VAL3,
        };

        struct MethodParams
        {
            float value;
            Enum1 e1;
            Enum2 e2;
            // some other members
        };

    The editor should present a dialog that lets the user set the MethodParams struct: a text control for the field value, and, for the fields e1 and e2, for example, two combo boxes (a combo box is a window control that has a list of choices). Obviously, every enum should be mapped to a string, so the user can make an informed selection (I have used E1_VAL1 etc., but normally the enums would be more meaningful). One could even want to map every enum to a more informative string (E1_VAL1 to "Image union algorithm", E1_VAL2 to "Image intersection algorithm" and so on...). The editor will include all the relevant game engine lib files (.h etc...), but this mapping is not automatic, and I am confused about how to handle it in a way that, if in future I add E1_VAL3 and E1_VAL4, the code change will be minimal.
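    The project is C++, but the underlying fix is language-neutral: keep the value-to-label mapping in exactly one place and have the editor read it, so a new enum value is a one-line change. A hedged Python sketch of that pattern (labels taken from the example above):

        from enum import Enum

        class Enum1(Enum):
            # Single source of truth: each member carries its UI label, so adding
            # E1_VAL3 here is the only change the editor needs.
            E1_VAL1 = "Image union algorithm"
            E1_VAL2 = "Image intersection algorithm"

        labels = [m.value for m in Enum1]   # fills the combo box
        chosen = list(Enum1)[0]             # selection index -> enum member

    In C++ the same single-source idea is commonly done with a table of {enum, string} pairs or an X-macro that generates both the enum and the label array from one list.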

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.

    Read the article

  • Is text-only mode a saving or a problem for battery savings?

    - by Robottinosino
    A friend is flying to the US from Europe and asked me a very thought-provoking question, which I am not remotely able to answer with substance, so I am asking it here: how do you absolutely maximise battery life on an Ubuntu (laptop) install? Do not rush to mark this as a duplicate; there is an important point here: does GNOME help or worsen battery life? Let me provide some context. The only task he needs to perform is editing text files in Vim. He is unsure whether running GNOME will drain his battery more, or actually save him some battery life given the smarts of GNOME's power-management features like "switch this peripheral to power save after X minutes..." (GNOME might just be a configuration front-end for settings that are governed by command-line utils, for all I know?) He could perfectly well boot the system in text-only mode and use the six automatic virtual consoles for his needs, if that's a saving at all over running tmux (I think so, because of all the smart buffering/history/etc. the latter does by default?). Exactly how would you advise him to run his laptop during his flight? What I told him already:
      - power off WiFi in the BIOS, not from the "GUI"
      - power off Bluetooth
      - switch off the courtesy light and use low monitor brightness
      - play music off of his phone, not mp3blaster
      - do not use his tiny portable mouse (and do not attach any other USB gimmicks like a "screen light", etc.)
      - stop development services he will not be using, especially apache2, tomcat, dovecot, postgresql, etc.
      - potentially: switch off his cron jobs? (he does an rsync + tar + 7za of his "work in progress" every so often)
    I think the above is standard stuff one could get off StackExchange, with many duplicates... The core of this question is, I think: will running Ubuntu in text-only mode be a saving in terms of battery life, or a problem? Why? (Provide some technical arguments.) I think it will be a saving, but I am also scared about "other things" detecting and enabling advanced chipset power-management features only when some services are started... and I fear these "services" may be off in text-only mode.

    Read the article

  • RAID advice with an SSD and two HDDs

    - by Nin
    I have a new machine with one 128GB SSD and two 1TB HDDs. The OS is on the SSD, and my initial thought was to put the two HDDs in RAID 1 for user data. After some more thought I came up with two other setups, and now I'm in doubt :) Can someone advise what would be the best setup?
      1. Single SSD, and the HDDs in RAID 1 (original thought)
      2. Create 2 partitions on each HDD (128GB and 872GB). Put the two 872GB partitions in RAID 1, and create another RAID 1 with the SSD and one 128GB HDD partition.
      3. Create 2 partitions on each HDD (750GB/250GB), put the 750GB partitions in RAID 1, and use the two 250GB partitions as backup, making automatic snapshots of the SSD to (one of) these partitions.
    I think the 2 main questions are: Is it advisable to create a RAID array with only part of a drive and actively use the other part of that drive, or should you always use the full disk? And is it advisable to create a RAID 1 array with an SSD and an HDD, or will that blow the whole speed advantage of the SSD?

    Read the article

  • I need help with Grub and restoring Windows?

    - by Bob Tahog
    I started out with Windows XP, then I installed Zorin (a distro based on Ubuntu), and then I installed Ubuntu. This was working great. Then I installed Windows 8 on yet another partition and couldn't get into my other OSes. I asked my tech teacher at school how to fix it, and she said to just clear the partition that I installed Windows 8 on, so I booted into a live version of Ubuntu and cleared the Windows 8 partition. Then I rebooted, and it still went into Windows 8 for some reason. So I got back onto live Ubuntu, and it turned out the Windows 8 partition hadn't cleared for some reason, so I did it again (and I'm positive it was the Windows 8 partition). I still couldn't fix GRUB, but I needed something from my XP partition, so I mounted it on the live Ubuntu, and now all the XP partition has are the folders 'Boot', 'Recovery', 'System Volume Information', 'temp' and the files 'bootmgr', 'BOOTNXT', 'BOOTSECT.BAK' and 'Recovery.txt'. Does anybody know how to fix this, or what I did wrong? Also, if I try booting from my hard drive it shows the Windows logo and says 'preparing automatic repair', then 'Diagnosing your PC', then restarts. Any and all help is greatly appreciated.

    Read the article

  • Facebook contest policy no-no?

    - by Fred
    I would like to post a link on a Facebook page that will exit Facebook entirely and go to a client's website, where people land on a page (the client's) where they can enter their e-mail address to be entered into a temporary database file, with rules and disclosures etc., for a draw once the number of entries reaches 100, for instance. Once the number of entries reaches 100, a random winner is picked and notified via e-mail. The functionality is as follows:
      - A link is placed on a Facebook page leading to an external page
      - The page is a form to merely enter an e-mail address for a contest
      - The e-mail is placed in a temporary file
      - An automatic e-mail is sent to the address used, for confirmation, using a SHA-256 hash
      - The person receives the e-mail saying something to the effect of "Please confirm your e-mail address etc. - If you did not authorize this, simply ignore this message and no further action will be taken."
      - If the person clicks on the confirmation link, the e-mail is then stored in the database and the person is notified again: "Thank you for signing up etc."
      - Once others do the same and the database reaches a certain number, the form is no longer accessible and a random e-mail is automatically picked
      - Once picked, an e-mail is automatically sent to the winner stating the instructions, notifying me also
      - Once that person clicks yet another confirmation link, the database is automatically deleted
    I have built this myself and have no intention of breaking any rules, nor of jeopardizing the work/time/energy I have put into this project. Is this allowed?
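    A hedged sketch of the confirmation-link step, reading "SHA-256 hash" as a confirmation token: a keyed HMAC-SHA-256 is safer than a bare hash, since a visitor then cannot forge the link for someone else's address (the secret and URL below are placeholders):

        import hashlib
        import hmac

        SECRET_KEY = b"replace-with-a-long-random-server-secret"  # hypothetical

        def confirmation_token(email: str) -> str:
            # Keyed hash of the address; only the server can mint valid tokens.
            return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

        def verify_token(email: str, token: str) -> bool:
            # Constant-time comparison avoids timing leaks.
            return hmac.compare_digest(confirmation_token(email), token)

        link = ("https://example.org/confirm?email=bob%40example.org&token="
                + confirmation_token("bob@example.org"))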

    Read the article

  • Garage Sale Code – Everything must go!

    - by mbcrump
    The term "Garage Sale Code" came from a post by Scott Hanselman. He defines Garage Sale Code as:
      - Complete - It's a whole library or application.
      - Concise - It does one discrete thing.
      - Clear - It'll work when you get it.
      - Cheap - It's free or < 25 cents.
      - (Quite Possibly) Crap - As with a garage sale, you'll never know until you get it home if it's useless.
    With the code I've posted here, you'll get all 5 of those things (with an emphasis on crap). All of the projects listed below are available on CodePlex with full source code and executables (for those that just want to run them). I plan on keeping this page updated when I complete projects that benefit the community. You can always find this page again by swinging by http://garagesale.michaelcrump.net, or you can keep on driving and find another sale.
      - WPF Alphabet (C#, WPF): an application that I created to help my child learn the alphabet. It displays each letter and pronounces it using speech synthesis. It was developed using WPF and C# in about 3 hours (so it's kinda rough).
      - Windows 7 Playlist Generator (C#, WinForms): quickly creates wvx video playlists for Windows Media Center. This functionality is not included in WMC and is useful if you want to play video files back to back without selecting the next file. It is also useful to queue up video files to keep children occupied!
      - Windows 7 Automatic Playlist Creator (C#, WinForms, Console): creates WMC playlists automatically whenever you want. You can choose whether the playlist is sorted alphabetically, by creation date, or randomly.
      - Generator Twitter Message for Live Writer (C#, LiveWriter API): a plug-in for Windows Live Writer that generates a Twitter message with your blog post's name and a TinyUrl link to the post. It does all of this automatically after you publish your post.

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best-practice guide for you: Best Practices for Building a Virtualized SPARC Computing Environment. Inside you'll find recommendations for how and when to leverage technologies like:
      - SPARC T4
      - OVM for SPARC hypervisor (version 2.2 and newer)
      - Solaris 11
      - Ops Center 12c
      - ZFS Storage Appliance
      - Oracle network switches
    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:
      - Easy, GUI-based provisioning of new VMs
      - Automated HA failover in the event of physical server failures
      - Automatic load balancing across a cluster of VM hosts
      - Complete end-to-end monitoring
    You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • "Default approach" when creating a class from scratch: getters for everything, or limited access?

    - by Prog
    Until recently I always had getters (and sometimes setters, but not always) for all the fields in my class. It was my default: very automatic, and I never doubted it. However, some recent discussions on this site made me realize it's maybe not the best approach. When you create a class, you often don't know exactly how it's going to be used in the future by other classes. In that sense, it's good to have getters and setters for all of the fields in the class, so other classes can use it in the future any way they want. Allowing this flexibility doesn't require you to over-engineer anything, only to provide getters. However, some would say it's better to limit access to a class, and only allow access to certain fields, while other fields stay completely private. What is your default approach when building a class from scratch? Do you make getters for all the fields? Or do you always choose selectively which fields to expose through a getter and which to keep completely private?
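    As a hedged illustration of the "limit access" position (the class and fields are invented): expose behavior and read-only views instead of blanket accessors, so the class's invariants stay inside it:

        class Account:
            def __init__(self, owner: str, balance: float) -> None:
                self._owner = owner        # exposed read-only below
                self._balance = balance    # private: callers must use deposit()

            @property
            def owner(self) -> str:
                return self._owner

            def deposit(self, amount: float) -> None:
                # The invariant (no non-positive deposits) lives with the data.
                if amount <= 0:
                    raise ValueError("deposit must be positive")
                self._balance += amount

    A blanket getter/setter pair for the balance would let any caller bypass that check, which is the usual argument against making accessors the default.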

    Read the article

  • Do there exist programming languages where a variable can truly know its own name?

    - by Job
    In PHP and Python one can iterate over the local variables and, if there is only one choice where the value matches, you could say that you know what the variable's name is, but this does not always work. Machine code does not have variable names. C compiles to assembly and does not have any native reflection capabilities, so a variable would not know its name. (Edit: per Anton's answer, the pre-processor can know the variable's name.) Do there exist programming languages where a variable would know its name? It gets tricky if you do something like b = a and b does not become a copy of a but a reference to the same place. EDIT: Why in the world would you want this? I can think of one example: error checking that can survive automatic refactoring. Consider this C# snippet:

        private void CheckEnumStr(string paramName, string paramValue)
        {
            if (paramValue != "pony" && paramValue != "horse")
            {
                string exceptionMessage = String.Format(
                    "Unexpected value '{0}' of the parameter named '{1}'.",
                    paramValue, paramName);
                throw new ArgumentException(exceptionMessage);
            }
        }
        ...
        CheckEnumStr("a", a); // Var 'a' does not know its name - this will not survive naive auto-refactoring

    There are other libraries provided by Microsoft and others that allow checking for errors (sorry, the names have escaped me). I have seen one library which, with the help of closures/lambdas, can accomplish error checking that survives refactoring, but it does not feel idiomatic. This would be one reason why I might want a language where a variable knows its name.
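    In Python, a refactoring-safe version of that check can be sketched by scanning the caller's frame, which is exactly the "variable knows its name" trick the question describes (only unambiguous when a single name is bound to the object; b = a defeats it, as noted above):

        import inspect

        def check_enum_str(param_value):
            # Recover the caller's name(s) for this exact object from the
            # calling frame's locals.
            caller = inspect.currentframe().f_back
            names = [n for n, v in caller.f_locals.items() if v is param_value]
            name = names[0] if len(names) == 1 else "<ambiguous>"
            if param_value not in ("pony", "horse"):
                raise ValueError(
                    f"Unexpected value {param_value!r} of the parameter named '{name}'.")

        a = "zebra"
        try:
            check_enum_str(a)
        except ValueError as e:
            print(e)  # Unexpected value 'zebra' of the parameter named 'a'.

    Python 3.8's f-string debugging form, f"{a=}", captures the same name at the call site without any frame inspection.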

    Read the article
