Search Results

Search found 19017 results on 761 pages for 'purchase order'.

Page 363 of 761

  • Error in using DBAN on HDD

    - by John Watson
    I am using DBAN to erase an HDD. DBAN is loaded from a CD, and the BIOS boot order has been set to favour the CD drive. On starting the laptop, the system boots from the CD and the DBAN interface appears. DBAN detects two storage devices, the HDD and the SD card. My HDD is 320 GB, but DBAN says 298 GB. It erases the SD card, but when I try to erase the HDD, it gives the following error:

        DBAN finished with non-fatal errors.
        *ERROR /dev/sdb (process crash)
        *ERROR /dev/sda (process crash)
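
    (A note on the size mismatch, for what it's worth: it is most likely decimal-versus-binary units rather than a fault. Drive vendors quote decimal gigabytes, while DBAN reports binary gibibytes, and the two figures agree:)

        $320 \times 10^{9}\ \text{bytes} \div 2^{30}\ \text{bytes/GiB} \approx 298\ \text{GiB}$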

    Read the article

  • No win 7 users available for login after dell datasafe factory reset

    - by user897052
    I created install discs using the Dell DataSafe 2.0 backup utility in order to re-install Windows on a friend's laptop (Dell Inspiron N5110). I ran the discs to do a factory reset. After the whole process, it booted, started loading Windows 7, displayed the messages "setup is preparing your computer for first use" and "setup is checking video performance," and showed the login screen. However, there don't seem to be any active users on the machine - I opened a command prompt window to check the users on the machine. Using the command prompt (again, from the login window), I activated/enabled the Administrator account, and even created another admin account, but upon logging in I received several errors, couldn't load any MMCs, etc. Any help would be appreciated.

    Read the article

  • #altnetseattle – CQRS

    - by GeekAgilistMercenary
    This is a topic I know nothing about, and thus, these may be supremely disparate notes.  Have fun translating.  : )   . . .and coolness that the session is well past capacity. CQRS separates things from the UI, and everything that needs to be populated is done through commands.  The domain and reports have separate storage. Events populate these stores of data, such as a "sold event". What it looks like is that the domain handles requests by event, which would be a product order or something similar. Event sourcing is a key element of the logic. DDD (Domain Driven Design) is part of the core basis for this methodology/structure. The architecture/methodology/structure is perfect for blade-style plugin hardware as needed. Good blog entries: DDDD: Why I love CQRS, another on Command and Query Responsibility Segregation (CQRS), more, CQRS à la Greg Young, a bit by Udi Dahan, and there are more.  Google, Bing, etc. are there for a reason. It appears the core underpinning architectural element of this is the break-out of uniquely identifiable actions, or I suppose better described as events.  Those events then act upon specific pipelines such as read requests, write requests, etc.  I will be doing more research on this topic and will have something written up shortly.  At this time it seems like nothing new, just a large architectural break-out of identifiable needs of the entire enterprise system.  The reporting is in one segment of the architecture, the domain is in another, hydration is broken out to interfaces, and events are executed to incur changes on the reports, or what appears by the description to be events on the domain. Anyway, more to come on this later.

    Read the article

  • apache virtual host concept and dns

    - by Subhransu
    I want to have around 60 repositories of projects, and I want to serve them from a dedicated remote server (Ubuntu) with the help of a Mercurial server, so that all my developers will be able to upload their changes. I have followed this article in order to do that, but got stuck at the Apache configuration step (sections 2.5 and 2.5.4). I have the following questions:

    1. What steps do I need to follow to make Apache serve /home/hg/repositories/private/hgweb.cgi when I enter dev.example.com/private?
    2. Is my virtual host file correct, or do I need to change anything?
    3. I bought example.com; how do I make it serve dev.example.com/private? Do I need to add an A record (like subdomain.example.com pointing to my server's IP) in the cPanel of the hosting company?

    My virtual host file (a few lines below appear truncated at "$" where they scrolled off screen):

        ServerAdmin webmaster@localhost
        ServerName dev.example.com
        ScriptAlias /private /home/hg/repositories/private/hgweb.cgi
        <Directory /home/hg/repositories/private/>
            Options ExecCGI FollowSymlinks
            AddHandler cgi-script .cgi
            DirectoryIndex hgweb.cgi
            AuthType Basic
            AuthName "Mercurial repositories"
            AuthUserFile /home/hg/tools/hgusers
            Require valid-user
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/dev.example.com_error.log
        # Possible values include: debug, info, notice, warn, error, cr$
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/dev.example.com_ssl_access.lo$
        SSLEngine on
        SSLCertificateFile "/etc/apache2/ssl/dev.example.com.crt"
        SSLCertificateKeyFile "/etc/apache2/ssl/dev.example.com.k$

    NOTE: I have not enabled the site yet, and I have also not changed any host, hostname, or httpd.conf file.
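
    If the "site not enabled" part is the blocker, a plausible next step on a stock Debian/Ubuntu Apache layout (assuming the vhost above is saved as /etc/apache2/sites-available/dev.example.com - the file name is an assumption) would be:

        # enable the modules this vhost relies on (use cgid instead of cgi on threaded MPMs)
        sudo a2enmod cgi ssl
        # enable the site, check the syntax, then reload Apache
        sudo a2ensite dev.example.com
        sudo apachectl configtest
        sudo service apache2 reload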

    Read the article

  • Change from IDE to AHCI after installing Windows 8

    - by Louis
    I had my drive controller configured for IDE when I installed Windows 7. This didn't change when I upgraded to Windows 8. I now need to enable AHCI, but doing so causes Windows to fail to start. It doesn't know how to automatically fix the problem. I was able to use Regedit from the recovery area, in order to try using this fix that worked for Vista. That key is missing in Windows 8, however. I read that the relevant key is now in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\storahci. But my settings already match the changes they suggest making. How can I get Windows to boot after enabling AHCI in the BIOS?
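
    For what it's worth, the commonly reported Windows 8 variant of that Vista fix also involves a StartOverride subkey beneath storahci, which the older instructions never mention. A hedged sketch, to be run from the recovery command prompt after backing up the registry (these are Windows reg.exe commands, and you should verify the key exists on your system first):

        :: set the AHCI driver to start at boot, and clear its override
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\storahci" /v Start /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\storahci\StartOverride" /v 0 /t REG_DWORD /d 0 /f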

    Read the article

  • SDLC/Deployment/Documentation ERP/framework that minimizes developer misery

    - by foampile
    I was wondering if there are favorite SDLC/Deployment/Documentation/Versioning ERP/frameworks that work with popular SDLC methodologies, such as Agile, and that minimize developer exposure to what most programmers hate doing most -- PAPERWORK. Often, release management is extremely inefficient, and there is a lot of data duplication across the documents that are required to accompany changes. For example, when submitting a deployment request, I must list all files and their revisions from source control -- but why is that necessary if every file revision I check in is pinned to a work order, and a deployment request is just a list of work orders? Such info should be able to be pulled from the system automatically, without me needing to extract and report it. And then there is the backout plan -- just do everything in reverse of what you did to deploy; why do you need specific instructions? Something similar applies to documentation. So I am curious whether there is an overall, all-encompassing ERP that includes source control and minimizes paperwork by sharing centralized data across the different documents associated with the SDLC (such as documentation being pulled from Javadoc without needing to write it separately), yet does not compromise structure and control over the code base and release management.

    Read the article

  • Resolve Instructional Webcast Series — E-Business Suite Payables Period Close

    - by user793044
    Resolve Instructional Webcast Series — New Product-Specific Troubleshooting Topics. For E-Business Suite we have the following coming up:

    Title: Resolve — Best Practices for E-Business Suite Payables Period Close
    Date: Nov 7, 2012
    Time: 8:00 am MT - 3:00 pm GMT - 10:00 am Eastern - 8:30 pm India - 7:00 am Pacific

    This one-hour webcast covers three main recommendations:

    1. Period Close Helper - a new diagnostic to identify and resolve period close issues.
    2. Master Generic Datafix Diagnostic (MGD) usage in proactive/reactive mode.
    3. Recommended Patch Collection (RPC) uptake.

    This session will help customers plan and complete their month-on-month period close activities successfully, approach period close proactively, avoid last-minute hassles, and prevent delays in meeting Period Close deadlines. Customers who are involved in the Payables period close activities (both functional and technical) will benefit most from the webcast. Join us. Leverage this opportunity to learn Support Best Practices that help you resolve the issues you face with your Oracle products. Oracle Support experts provide live demonstrations of proactive resources. You will see how working proactively helps you work more efficiently — from using the right tools to providing the right information on Service Requests — so you can get answers faster. Register for sessions now. Resolve — Troubleshooting Questions? Contact Oracle's "Get Proactive" team today. WORK SMART. SOLVE FAST. RESOLVE.

    Read the article

  • Use Dynamic DNS to access Java servlet, still need port forward ?

    - by Frank
    If I use Dynamic DNS, such as the free service at https://www.dyndns.com, do I still need to set up a static IP and port forwarding? I have DSL, most likely with a dynamic IP address, and I run a Java servlet on my notebook to receive PayPal IPN messages. In order for the messages to reach my notebook, I: [1] set up a static IP and [2] did port forwarding. But I found that each time the PC restarts, it has a different external IP, so I was advised to [3] get a Dynamic DNS service like the free one mentioned above. But now I'm a bit confused: if I have step [3], do I still need to do [1] and [2]? Isn't step [3] supposed to do [1] and [2] for me? And since I've already done [1] and [2], I now wonder whether they would cause trouble for step [3]. Do I need to undo them, or do I need all of them together?
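
    (A quick sanity check, for reference, that a DDNS name is tracking the current external IP - the hostname below is a stand-in:)

        # what the DDNS name currently resolves to
        nslookup myhost.dyndns.org
        # what the external IP actually is right now
        curl ifconfig.me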

    Read the article

  • local cache for NAS or network folder

    - by HugoRune
    I am planning to build a network attached storage (NAS) server. Is there a way to automatically cache frequently accessed files from the remote storage on the local PC? (I am not looking for a way to sync whole folders, like rsync, but rather something that automatically and transparently caches, say, the last-accessed 50 GB of files.) Ideally I am searching for something that caches writes as well as reads, since only one PC will be accessing the server (and one day of lost changes, if the local cache is damaged, would be acceptable). I looked into Windows Offline Files, but as far as I could tell this requires manual interaction to disconnect the server or go into offline mode in order to use the cache. The server would probably be running Linux or FreeNAS; the PC runs Windows XP, but could be upgraded to 7 if required.

    Read the article

  • BPM 11g and Human Workflow Shadow Rows by Adam Desjardin

    - by JuergenKress
    During the OFM Forum last week, there were a few discussions around the relationship between the Human Workflow (WF_TASK*) tables in the SOA_INFRA schema and BPMN processes.  It is important to know how these are related, because it can have a performance impact.  We have seen this performance issue several times when BPMN processes are used to model high-volume system integrations without knowing all of the implications of using BPMN in this pattern. Most people assume that BPMN instances and their related data are stored in the CUBE_*, DLV_*, and AUDIT_* tables in the same way that BPEL instances are stored, with additional data in the BPM_* tables as well.  The group of tables that is not usually considered, though, is the WF* tables that are used for Human Workflow.  The WFTASK table is used by all BPMN processes in order to support features such as process-level comments and attachments, whether those features are currently used in the process or not. For a standard human task that is created from a BPMN process, the following data is stored in the WFTASK table:

    - One row per human task that is created
    - COMPONENTTYPE = "Workflow"
    - TASKDEFINITIONID = Human Task ID (partition/CompositeName!Version/TaskName)
    - ACCESSKEY = NULL

    Read the complete article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Is it Secure to Grant the Apache User Ownership of Directories & Files for WordPress

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu Server 12 machine. Everything runs fine, but there is an issue when it comes to automatically updating and uploading media via WP, as the Apache "www-data" user does not have permission to write to the directories. "user1" has full permissions. All my directories have permissions of 0755 and files 0644. My directory setup is as follows: /home/user1/public_html, and all WP files and directories are in "public_html". In order to work around the auto-updating and uploading of media, I've granted the Apache user ownership of the following directories:

        sudo chown www-data:www-data wp-content -R
        sudo chown www-data:www-data wp-includes -R
        sudo chown www-data:www-data wp-admin -R

    I would like to know how secure this is, and if it is not secure, what would be the best solution? One that will allow me to keep all files and directories owned by user1 and still allow WP to automatically update and upload media.
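
    One commonly suggested middle ground - a sketch only, to be tested on a staging copy first - keeps user1 as owner, makes www-data the group owner of just the paths WordPress must write to, and grants group write permission:

        # keep user1 as owner; give Apache's group write access only where WP needs it
        sudo chown -R user1:www-data /home/user1/public_html/wp-content
        sudo find /home/user1/public_html/wp-content -type d -exec chmod 775 {} \;
        sudo find /home/user1/public_html/wp-content -type f -exec chmod 664 {} \;

    This confines www-data's writes to wp-content (uploads, plugins, themes); core updates touching wp-admin and wp-includes would still need temporary elevation or WordPress's SSH/FTP update mechanism.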

    Read the article

  • What are your suggestions on learning how to think?

    - by Jonathan Khoo
    First of all, this is not the generic 'make me a better programmer' question, even though the outcome of asking this question might seem similar to it. On programmers.SE, I've read and seen these get closed here, here, here, here, and here. We all know there are a multitude of generic suggestions to hone your programming skills (e.g. reading SO, reading recommended books, following blogs, getting involved in open-source projects, etc.). This is not what I'm after. I also acknowledge the active readership on this web site and am hoping it works in my favour by yielding some great answers. From reading correspondence here, there appear to be a vast number of experienced people who are working, or have worked, in programming-related fields. And most of you can convey thoughts in an eloquent, concise manner. I've recently noticed the distinction between someone who's capable of programming and a programmer who can really think. I refuse to believe that in order to become a great programmer, we simply submit ourselves to a lifetime of sponge-like behaviour (i.e. absorb everything related to our field by reading, listening, watching, etc.). I would even state that even if you know every single programming concept that allows you to solve problem X faster than everyone around you, if you can't think, you're enormously limiting yourself - you're just a fast robot. I like to believe there's a whole other facet of being a great programmer which is unrelated to how much you know about programming: how well you can intertwine new concepts and apply them to your programming profession or hobby. I haven't seen anyone delve into, or address, this facet of the human mind and programming. (Yes, it's also possible that I haven't looked hard enough - sorry if that's the case.) So, for anyone who has spent any time thinking about what I've mentioned above - or maybe it's everyone here, because I'm a little behind in my personal/professional development - what are your suggestions on learning how to think? Aside from the usual reading, what else have you done to be better than the other people in your/our field?

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    1. I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    2. I have to include a hive-default.xml file.
    3. I have to include a script file.
    4. The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml files on different clusters (e.g. MySQL- or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

    1. workflow.xml
    2. hive-defaults.xml (make sure this file contains your metastore connection data)
    3. ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka JAR and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka JAR goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding JARs to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of JARs necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow

    Let's do them in order. First, validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker. All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

        oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • Setting Up IRM Test Content

    - by martin.abrahams
    A feature of the 11g IRM Server that sometimes gets overlooked is the ability to set up some test content that any IRM user can access to verify that their IRM Desktop can reach the server, authenticate successfully, and render protected content successfully. Such test content is useful for new users, and in troubleshooting scenarios. Here's how to set up some test content...

    In the management console, go to IRM - Administration - Test Content, as shown. The console will display a list of test content - initially an empty list. Use the Add option to specify the URL of a document or image, and define one or more labels for the test content in whichever languages your users favour. Note that you do not need to seal the image or document in order to use it as test content, nor do you need to set up any rights for the test content. The IRM Server will handle the sealing and rights assignment automatically, such that all authenticated users are authorised to view the test content. Repeat this process for as many different types of content as you would like to offer for test purposes - perhaps a Word document, a PDF document, and an image. To keep things simple the first time I did this, I used the URL of one of the images in the IRM Server's UI - so there was no problem with the IRM Server being able to reach that image. Whatever content you want to use, the IRM Server needs to be able to reach it at the URL you specify.

    Using Test Content

    Open a browser and browse to the URL that the IRM Desktop normally uses to access the IRM Server, for example: http://irm11g.oracle.com/irm_desktop. If you are not sure, you can find this URL in the Servers tab of the IRM Options dialog. Go to the Test tab, and you will see your test content listed. By opening one of the items, you can verify that your IRM Desktop is healthy and that you can authenticate to the IRM Server.

    Read the article

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue; let me explain my problem. I want to set up a git repository that three or four users will contribute to, so they need to be able to download the code and upload their changes to the server, or update their branch with the latest modifications. So I set up a Linux machine, installed git, set up the repository, and then added the users in order to enable access through SSH. Now my question is: what's next? The git documentation is a little bit confusing. For example, when I try to clone the repository from a dummy user account, I get:

        xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git
        Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/
        [email protected]'s password:
        fatal: '/pub.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    Is that a problem of privileges? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly, but git is powerful though, I admit, not so user-friendly. Thanks, Br
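
    Two usual culprits with that error are the URL path and repository permissions: in ssh:// URLs the path after the host is absolute, so '/pub.git' literally points at the root of the server's filesystem. A minimal shared-repository sketch, assuming the repository lives at /srv/git/pub.git and a 'gitusers' group exists (all names here are illustrative):

        # on the server: a bare repo writable by the developer group
        sudo git init --bare --shared=group /srv/git/pub.git
        sudo groupadd gitusers
        sudo usermod -aG gitusers alice      # repeat per developer
        sudo chgrp -R gitusers /srv/git/pub.git

        # on each client: clone using the full absolute path after the host
        git clone ssh://alice@server.example.com/srv/git/pub.git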

    Read the article

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC - though this same process could be used with migrations of Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with inbox tooling. I need to remove the VM from management of the source Hyper-V host in order to import the VM to the new Hyper-V host. I want to retain the configuration file, so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.

    Read the article

  • How to find the static ip address of my router?

    - by OSX NINJA
    I bricked my Linksys WRT54GS router when trying to change the firmware on it from DD-WRT to OpenWrt. In order to unbrick it, I need to be able to do an FTP transfer to it. The problem is that it isn't using DHCP addressing, and I can't just use the default IP address of 192.168.1.1. I have to use the IP address it was set to before it got bricked. The problem is I forgot what that number was. Is there some program or script that can find it out?
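
    If the board still gets far enough to touch the network, two long-shot approaches sometimes surface a forgotten address (a sketch only; the interface name and ranges are assumptions, and a fully bricked router may never answer at all):

        # watch for ARP traffic from the router as it powers on
        sudo tcpdump -i eth0 -n arp
        # or ping-sweep the private ranges it may have been configured in
        sudo nmap -sn 192.168.0.0/16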

    Read the article

  • Upgrade to Genuine Windows 8 Pro from non genuine Windows 7

    - by mark
    I have a computer with a non-genuine Windows 7 (cracked with Windows Loader). I was thinking of buying/upgrading to Windows 8 Pro. I ran Windows8-UpgradeAssistant.exe and was told that I can upgrade to Windows 8 Pro.

    1. Can I perform a clean upgrade (format and install) from my current Windows 7 to Windows 8?
    2. In future, in order to re-install Windows 8, do I need to re-install the non-genuine Windows 7 and install on top of it?
    3. If my hard disk crashes, or I want to install on a new hard disk (clean install), do I need to install Windows 7 again before upgrading to Windows 8?
    4. If I don't like Windows 8, can I downgrade to a genuine Windows 7?

    Read the article

  • What's a good approach to adding debug code to your application when you want more info about what's going wrong?

    - by Andrei
    When our application doesn't work the way we expect it to (e.g. throws exceptions, etc.), I usually insert a lot of debug code at certain points in the application in order to get a better overview of what exactly is going on, what the values of certain objects are, and to better trace where the error is triggered from. Then I send a new installer to the user(s) that are having the problem, and if the problem is triggered again, I look at the logs and see what they say. But I don't want all this debug code to stay in the production code, since it would create some really big debug files with information that is not always relevant. The other problem is that our code base changes, and the next time the same debug code might have to go in different parts of the application.

    Questions:
    - Is there a way to merge this debug code into the production code only when needed, and have it appear at the correct points within the application?
    - Can it be done with a version control system like git, so that all that would be needed is a git merge?

    P.S. The application I'm talking about is .NET, written in C#.
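
    On the git idea specifically, one workflow that matches this description - a sketch, with invented branch names - keeps the instrumentation on its own long-lived branch and replays it as the code base moves:

        # keep all diagnostic code on a dedicated branch
        git checkout -b debug-instrumentation master
        # ...add logging code, commit...

        # when master moves on, replay the debug commits on top of it
        git rebase master debug-instrumentation

        # build the diagnostic installer from a throwaway merge branch
        git checkout -b diagnostic-build master
        git merge debug-instrumentation

    The rebase step addresses the "code base changes" concern: merge conflicts show exactly where the old instrumentation points no longer fit.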

    Read the article

  • [MINI HOW-TO] Disable Third Party Extensions in Internet Explorer

    - by Mysticgeek
    Are you looking for a way to make browsing to sites you're not sure of in Internet Explorer a bit more secure? Here we take a quick look at how to disable third-party extensions in IE. Open up Internet Explorer, click on Tools, then select Internet Options. Under Internet Properties, click on the Advanced tab, and under Settings scroll down and uncheck "Enable third-party browser extensions", then click OK. Now restart IE and the extensions will be disabled. You can re-enable them when you know a site can be trusted and you want to be able to use its services. This will help avoid malware when you visit a site that wants to install an extension and you're not sure about it.

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. "The new phone books are here!  The new phone books are here!  I'm a somebody!" - Steve Martin in The Jerk

    It is a Dell Precision M4500 with an Intel Core i7 2.8 GHz running 64-bit Windows 7, with a 15.6" widescreen, 8 GB RAM, and a 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I'm coming from, it's a really nice step forward.  I won't even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let's just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean, instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week.  But that's a different story for a different time.

    The main topic for this post is to highlight the variety of tools that I use in my job that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them:

    - Virtual Clone Drive: Easily mount an ISO image as a DVD drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools.
    - SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but is intended for development use.
    - SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can't use BIDS from 2008 to write reports for a 2005 server.  There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon in some manner similar to how Visual Studio now allows you to pick which version of the .NET Framework you are coding against.
    - Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it.  I've used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev.
    - Vault Professional Client: Several years ago we replaced Visual SourceSafe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional.
    - Red Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red Gate bought it, and then Red Gate's first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else's code when their indenting was nonexistent, or worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer's copy of the database in sync with one another.
    - Fiddler: Helps you watch the whole traffic stream on web visits.  Haven't used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.
    - Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  Fantastic tool, and a bargain price.
    - SciTE: SciTE is a Scintilla-based text editor, and it is a fantastic, lightweight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible.
    - Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and, coming soon, MVC/jQuery.
    - WinZip, WITH the command-line add-in: The classic compression program, with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned build packages are zip files.
    - XML Notepad: Haven't used this a lot myself, but one of my team really likes it for examining large XML files.
    - LINQPad: Again, haven't used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.
    - SQL Sentry Plan Explorer: SQL Server Showplan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for compressing the graphical plan into a more readable layout.
    - Araxis Merge: A great diff and merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge.  For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.
    - Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases and identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files.
    - Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.
    - DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.
    - Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn, which definitely has some psychological benefits when communicating with users, too.
    - FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon's integration with Google Reader allows me to keep them all in sync.  I don't particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.
    - Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard.  Synergy is the magic software that makes this work.
    - TweetDeck: I'm not the most active tweeter in the world, but when I want to check in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

    Whew!  That's a lot.  No wonder it took me a couple of days to get everything set up the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you'll find a new tool or two to make your work life a little easier.

    Read the article

  • hplip gui required plugin

    - by Terence Stamp
    I downloaded the HPLIP GUI to manage my printer, but in order to set it up correctly, you must click the green puzzle piece labeled "Install required plugin." Once you do, you are presented with two options: download it from HP's server, or locate the file locally on your hard disk. In the past, I have had success with downloading it from HP's server. Currently, my luck is not as good. My question is simple: where can I find the plugin on the Internet, so that I might download it and install it using the second option of installing from my hard drive?
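
    Before hunting the file down by hand, it may be worth trying HPLIP's command-line plugin installer (assuming the hplip package shipped with its CLI tools); its interactive mode offers the same download-or-local-file choice as the GUI:

        # interactive install of the proprietary plugin; prompts to download
        # from HP or to use a plugin file already on disk
        sudo hp-plugin -i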

    Read the article

  • How to reference jQuery in SharePoint2010

    - by ybbest
    In normal ASP.NET development, in order to add jQuery to your solution you add the following script reference to your master page:

        <script language="javascript" type="text/javascript" src="Scripts/jquery-1.4.1.min.js"></script>

    There are not many differences when referencing jQuery in SharePoint 2010; in fact, you have quite a few ways to achieve it. The first thing you need to do is deploy jQuery using the SharePoint module template in Visual Studio. Then you can choose one of the following ways of referencing jQuery:

    1. Using a delegate control
    2. In the master page
    3. Ad hoc (e.g. in a site page or web part)
    4. Using a custom action (this can be used in a sandbox solution; you can find an example here)

    References:
    - jquery
    - How to bootstrap JQuery on every SharePoint page, even in the Sandbox
    - Referencing Javascript Files with SharePoint 2010 Custom Actions using ScriptSrc

    Read the article

  • Connect to SVN repository with Netbeans using SVN+SSH

    - by shuby_rocks
    Hello all, I am trying to connect to an SVN server in order to import my project into it, using the svn+ssh authentication method. I am using the NetBeans IDE (6.8) with the Subversion plugin installed, on Windows XP SP2. I have plink installed, with its path set in the Windows PATH environment variable. When I use the repository URL (XXXX and YYYY replaced with sensible things) svn+ssh://XXXX@YYYY/home/dce/svn/trunk along with this external tunnel command plink -l <myUserName> -i C:\\privateKey.ppk I keep getting this error:

        org.tigris.subversion.javahl.ClientException: Network connection closed unexpectedly

    I searched about it on the Internet and tried many things, but nothing worked out. Please help if anybody has some idea what may be going wrong. Thanks a lot in advance.
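
    One diagnostic that often isolates this: svn+ssh simply runs svnserve in tunnel mode on the remote host, so the tunnel can be tested by hand from a command prompt (the placeholders match the XXXX/YYYY above):

        plink -l XXXX -i C:\privateKey.ppk YYYY svnserve -t

    If a greeting beginning with "( success" comes back, the tunnel itself works and the problem is on the NetBeans/Subversion side; if it hangs or drops, fix the SSH key/server setup first.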

    Read the article

  • PC version of Google Chrome doesn't recognize ".local" domain name

    - by prosseek
    With Bonjour installed on the PC, I can access my Mac server with a ".local" name. For example, I can access my Mac with the name "prosseek.local". The problem is that Chrome for PC doesn't recognize ".local" names: it opens the search page instead of accessing the Mac server. This issue isn't happening with other web browsers (IE/Firefox) on the PC. What is even weirder is that Chrome seems to recognize ".local" sometimes, but not always. How do I solve this issue? Or, how can I teach Chrome that ".local" is part of a host name, so that it does not redirect to the search page?
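
    Two quick checks, for reference (general Chrome address-bar behavior, not specific to .local): first confirm the name resolves outside the browser, then give Chrome an unambiguous URL, since a bare hostname is often treated as a search while an explicit scheme or trailing slash forces navigation:

        # confirm Bonjour/mDNS resolution works at all (Windows command prompt)
        ping prosseek.local

        # in Chrome's address bar, force navigation rather than search:
        #   http://prosseek.local/
        #   prosseek.local/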

    Read the article
