Search Results

Search found 62606 results on 2505 pages for 'sql files'.


  • Using a PowerShell Script to delete old files for SQL Server

    Many clients use custom stored procedures or third-party tools to back up databases in production environments instead of using database maintenance plans. One of the things you then need to do is manage the number of backup files kept on disk so you don't run out of disk space. There are several techniques for deleting old files, but in this tip I show how this can be done using PowerShell.
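
    The tip itself demonstrates the approach in PowerShell. Purely as a rough, language-agnostic sketch of the same retention idea (shown here in Python; the backup directory, .bak extension, and seven-day retention window are assumptions, not from the article), deleting backups older than a cutoff might look like this:

        import time
        from pathlib import Path

        BACKUP_DIR = Path(r"D:\SQLBackups")   # hypothetical backup location
        RETENTION_DAYS = 7                    # assumed retention window

        cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60
        for bak in BACKUP_DIR.glob("*.bak"):          # only backup files
            if bak.stat().st_mtime < cutoff:          # older than the cutoff
                print(f"Deleting {bak}")
                bak.unlink()                          # remove the old backup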

    Read the article

  • sqlcmd won't execute - Claims Native Client not installed properly

    - by nuit9
    I'm trying to use sqlcmd to execute some SQL scripts. I am using a test command with a simple query like:

        sqlcmd -S HOSTNAME -d MYDATABASE -Q 'SELECT Names FROM Customers'

    sqlcmd does not appear to make any attempt to connect to the server, as it displays this message:

        Sqlcmd: Error: Connection failure. SQL Native Client is not installed correctly. To correct this, run SQL Server Setup.

    The Native Client was presumably installed as part of the SQL Server setup, and most likely installed correctly. I actually get this message on any machine with SQL Server installed when trying to use sqlcmd, so it's not a matter of the installation being corrupt. Unfortunately the message really tells me nothing about the problem, so I don't know what the real issue is. I know the SQL Native Client is working properly, since a VBScript was able to execute SQL queries against the database. Is there some additional configuration needed to use sqlcmd?

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5 MB each, and I copy them by selecting all the files I need in a Finder window and then dropping them onto the destination directory. Almost always some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place, the Finder window for the destination directory is updated to show 200 files where it was empty before), but after the copying ends the destination directory shows fewer than 200 files (say 190). If I copy the same 200 files to the NAS again, without replacing the already copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. At no stage does the Finder give any warning that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder, and/or if there is something I can do to solve it.
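
    One way to confirm which files were silently skipped is to compare the file names in the source and destination directories after the copy finishes. A minimal sketch, assuming the NAS volume is mounted under /Volumes (both paths are placeholders):

        from pathlib import Path

        src = Path("/Users/me/Pictures/to_copy")     # placeholder source folder
        dst = Path("/Volumes/ReadyNAS/Pictures")     # placeholder AFP mount point

        src_names = {p.name for p in src.iterdir() if p.is_file()}
        dst_names = {p.name for p in dst.iterdir() if p.is_file()}

        missing = sorted(src_names - dst_names)      # files that never arrived
        print(f"{len(missing)} of {len(src_names)} files missing on the NAS:")
        for name in missing:
            print(" ", name)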

    Read the article

  • Running SQL*Plus with bash causes wrong encoding

    - by Petr Mensik
    I have a problem with running SQL*Plus in bash. Here is my code:

        #!/bin/bash
        #curl http://192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql > script.sql
        wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql
        set NLS_LANG=_.UTF8
        sqlplus /nolog << ENDL
        connect login/password
        set sqlblanklines on
        start script.sql
        exit
        ENDL

    I download the insert statements from our intranet, put them into an sql file and run it through SQL*Plus. This is working fine. My problem is that when I save the file script.sql my encoding goes wrong. All special characters (like íášc) are broken, and that causes wrong characters to be inserted into my DB. The encoding of that file is UTF-8, and UTF-8 is also set on the XSQL page on our intranet. So I really don't know where the problem could be. Any advice regarding my script is also welcome; I am a total newbie in Linux scripting :-)

    Read the article

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB. I want to duplicate this directory.

    Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking the checksums after copying important files. In this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with differing checksums, it turned out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed the date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to be changed, exactly when it was changed, or why it happened for these particular files, but at least it seems to explain the checksum discrepancy.

    Method 2: As I want the files to be duplicated exactly, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test in addition to a thorough file comparison test in FreeFileSync led to two similar yet different results: 31 files fail the checksum test, 23 files fail the file comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered but also some AVIs, MPGs and a large 7-zip file. Closer inspection of a JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-zip file, I have not been able to pin down the discrepancy. Note that in both methods, every file has its correct file size after being copied.

    Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
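
    A sketch of the kind of post-copy checksum comparison described above, assuming the original and the copy have identical directory structure (the two paths are placeholders):

        import hashlib
        from pathlib import Path

        def md5_of(path, chunk_size=1 << 20):
            # Hash the file in chunks so large files don't need to fit in memory.
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        original = Path(r"C:\Pictures")        # placeholder source directory
        copy     = Path(r"D:\Pictures_copy")   # placeholder destination directory

        mismatches = []
        for src in original.rglob("*"):
            if src.is_file():
                dst = copy / src.relative_to(original)
                if not dst.exists() or md5_of(src) != md5_of(dst):
                    mismatches.append(src.relative_to(original))

        print(f"{len(mismatches)} files differ or are missing")
        for rel in mismatches:
            print(" ", rel)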

    Read the article

  • Automatically Organizing Windows Files and Folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very hard. For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Some files, on the other hand, are there to stay, and I used to move them out of the download folder manually. Other folders on my computer are full of source code, tests, programs, tools, and so on. I need a way to organize these countless files automatically. What are the best tools for automatically organizing and sorting your files and folders?
    Digital Janitor http://davidevitelaru.com/software/digital-janitor/
    Belverede http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
    Download Mover http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
    File/Folder Date Organizer http://seedling.dcmembers.com/other/ffdorg.zip
    DropIt http://www.lupopensuite.com/db/dropit.htm
    Other questions about organizing files, the desktop, etc.:
    How to automate the process of organizing audio files on Windows
    Organizing My Windows Desktop
    What's a good way for organizing PDF documents on Windows?
    Folksonomy tagging for files
    What is your method of “folksonomy” tagging for files on your local machine?
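
    As a rough illustration of what tools like these automate, here is a minimal sketch that sorts a downloads folder into subfolders by file extension (the folder path and the category mapping are assumptions, not from the original post):

        import shutil
        from pathlib import Path

        DOWNLOADS = Path.home() / "Downloads"            # assumed downloads folder
        CATEGORIES = {                                    # hypothetical mapping
            ".exe": "Installers", ".msi": "Installers",
            ".pdf": "Documents",  ".docx": "Documents",
            ".mp4": "Videos",     ".avi": "Videos",
        }

        # Snapshot the listing first so moving files doesn't disturb the iteration.
        for item in list(DOWNLOADS.iterdir()):
            if item.is_file():
                folder = CATEGORIES.get(item.suffix.lower(), "Other")
                target = DOWNLOADS / folder
                target.mkdir(exist_ok=True)              # create the category folder if needed
                shutil.move(str(item), str(target / item.name))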

    Read the article

  • SQLIO help decipher output

    - by SQL Learner
    When load testing on a SQL Server box using the following commands (the test file is 25 GB):

        sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt

    Can anyone help me decipher the output? I do not understand the minimum and average latency. What do these numbers mean?

        IOs/sec: 10968.80
        MBs/sec: 685.55
        latency metrics:
        Min_Latency(ms): 1
        Avg_Latency(ms): 5
        Max_Latency(ms): 21

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each -- and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder, but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are:
    - Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file.
    - Construct a white list of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server.
    - Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it.
    So my question here is: what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
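
    A minimal sketch of the whitelist approach from the second option above, keeping letters, digits, and a few safe punctuation characters and substituting everything else (the exact character set and the length cap are assumptions):

        import string

        SAFE = set(string.ascii_letters + string.digits + "-_. ")   # assumed whitelist
        MAX_LEN = 150                                               # assumed length cap

        def title_to_filename(title: str) -> str:
            # Substitute anything outside the whitelist, then tidy up the separators.
            cleaned = "".join(c if c in SAFE else "_" for c in title)
            cleaned = cleaned.replace(" ", "_")
            cleaned = "_".join(part for part in cleaned.split("_") if part)   # collapse runs of "_"
            return cleaned[:MAX_LEN] or "untitled"

        print(title_to_filename("C'est la vie: a day at the café?"))
        # -> C_est_la_vie_a_day_at_the_caf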

    Read the article

  • Beginner Geek: Scan Files for Viruses Before Using Them

    - by Mysticgeek
    To help avoid getting your computer infected by malicious software, it's a good idea to scan files before executing them. Today we take a look at a couple of options that will let you scan files easily from your desktop.

    Scan File with Your Antivirus Software
    Most Antivirus software will put an option in the context menu so you can scan individual files. After downloading a file or email attachment, simply right-click the file and select the option to scan with your Antivirus software. If you want to scan more than one file at a time, hold down the Ctrl key while you click each file you want to scan, then right-click and select the option to scan with your Antivirus software. Here is our favorite Antivirus app, Microsoft Security Essentials, scanning a couple of files. If a virus is found, your Antivirus app will delete it or put it in Quarantine so it cannot infect your system.

    Using VirusTotal Uploader
    If you want to be very thorough and get a second opinion (actually 41 of them), check out the VirusTotal Uploader. This handy app will scan your files with 41 different Antivirus engines online. After installing VirusTotal Uploader, right-click the file, go to Send To, then VirusTotal. Alternatively, you can launch VirusTotal Uploader and upload the file from there. It will send the file to VirusTotal.com, scan it with 41 different Antivirus engines, and show you the results. If you don't want to install the Uploader, you can go to the VirusTotal site and upload a file from there to scan. We've noticed that occasionally there will be a false positive detected on files we know are clean. Sometimes the definition database of an Anti-malware app isn't current, or an obscure Antivirus app will find something questionable. If that is the case, use your best judgment when viewing the results.

    Conclusion
    Most Antivirus apps today have real-time scanning and should be able to detect possible infections before you're able to execute them. However, if they don't, or when in doubt, following these tips can save you a lot of headaches in the long run. If you use a lot of different flash drives throughout the day, check out our article on how to scan a thumb drive for viruses from the AutoPlay dialog.

    Download Microsoft Security Essentials
    Download VirusTotal Uploader
    VirusTotal Website
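
    If you prefer to check a file without uploading it anywhere, one common trick is to compute its hash locally and search for that hash on the VirusTotal site. A minimal sketch (the file path is a placeholder):

        import hashlib

        def sha256_of(path, chunk_size=1 << 20):
            # Hash the file in chunks so large downloads don't need to fit in memory.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Paste the printed hash into VirusTotal's search box to see existing reports.
        print(sha256_of(r"C:\Users\me\Downloads\setup.exe"))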

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually as parameters for functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name plus a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually implies I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).

    Read the article

  • SQL Server 2005: Improving performance for thousands of Insert requests. Logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of a SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15 ms I get 100-200 logins/logouts per second (reported at the same time by Profiler), and then after 15 ms I again have a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006, the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and runs two stored procedures on a remote SQL Server 2005: it inserts a parent record, retrieves the Identity value, and using it calls the second stored procedure 1, 2 or more times (sometimes several thousand), inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them being covering non-clustered. There is a pretty complex Insert trigger that copies the inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120 ms between the Audit Logout and Audit Login trace entries, which would almost give me the chance to insert about 8 records. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning, and has an SLA to fulfill client requests coming in as flat file orders, and some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of the DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe this has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, and the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user that is being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
        for (int i = 1; i <= 10000; i++)
        {
            using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "use tempdb";
                    cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server Profiler trace:

        Audit Login                              master  2010-01-13 23:18:45.337  1 - Nonpooled
        SQL:BatchStarting   use tempdb           master  2010-01-13 23:18:45.337
        RPC:Starting        exec sp_reset_conn   tempdb  2010-01-13 23:18:45.337
        Audit Logout                             tempdb  2010-01-13 23:18:45.337  2 - Pooled
        Audit Login         -- network protocol  master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting   use tempdb           master  2010-01-13 23:18:45.383
        RPC:Starting        exec sp_reset_conn   tempdb  2010-01-13 23:18:45.383
        Audit Logout                             tempdb  2010-01-13 23:18:45.383  2 - Pooled
        Audit Login         -- network protocol  master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting   use tempdb           master  2010-01-13 23:18:45.383
        RPC:Starting        exec sp_reset_conn   tempdb  2010-01-13 23:18:45.383
        Audit Logout                             tempdb  2010-01-13 23:18:45.383  2 - Pooled

    Read the article

  • How to count differences between two files on Linux?

    - by Zsolt Botykai
    Hi all, I need to work with large files and must find the differences between two of them. I don't need the differing bits themselves, just the number of differences. For the differing rows I came up with:

        diff --suppress-common-lines --speed-large-files -y File1 File2 | wc -l

    It works, but is there a better way to do it? And how can I count the exact number of differences (with standard tools like bash, diff, awk, sed, or some old version of perl)? Thanks in advance.
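
    If the two files have the same number of lines and "differences" means differing line pairs, a small script can count them directly instead of post-processing diff output. A minimal sketch in Python that reads both files line by line, so large files never have to fit in memory (the assumption of equal line counts is mine):

        count = 0
        with open("File1") as f1, open("File2") as f2:
            for line1, line2 in zip(f1, f2):
                if line1 != line2:
                    count += 1            # one differing row
        print(count)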

    Read the article

  • Problem with file names that contain spaces and single quotes?

    - by Vijay
    I'm using the following code to create thumbnails with ffmpeg, and it works fine for files that have no spaces or quotes in their names. But when a file has a space (like 'sachin knock.flv') or a quote (like sachin's_double_cent.mp4), it doesn't work. What can I do to make it handle those files correctly? One restriction is that I can't rename the files. My code is:

        <?php
        error_reporting(E_ALL);
        extension_loaded('ffmpeg') or die('Error in loading ffmpeg');

        $link = mysql_connect('localhost', 'root', '');
        if (!$link) {
            die('Not connected : ' . mysql_error());
        }
        $db_selected = mysql_select_db('db', $link);

        $max_width  = 120;
        $max_height = 72;
        $path = "/home/rootuser/public_html/temp/";

        $qry = "select id, input_file, output_file from videos where thumbnail='' or thumbnail is null;";
        $res = mysql_query($qry);

        while ($row = mysql_fetch_array($res, MYSQL_ASSOC)) {
            $orig_str = array(" ");
            $rep_str  = array("\ ");
            $outfile  = $row[output_file];
            // $infile = $row[input_file];
            $infile1  = str_replace($orig_str, $rep_str, $outfile);
            $tmp      = explode(".", $infile1);
            $tmp_name = $tmp[0];
            $imgname  = $tmp_name . ".png";
            $srcfile  = "/home/rootuser/public_html/uploaded_vids/" . $outfile;

            echo exec("ffmpeg -i " . $srcfile . " -r 1 -ss 00:00:05 -f image2 -s 120x72 " . $path . $imgname);

            $nname = "./temp/" . $imgname;
            $fileo = fopen($nname, "rb");
            if ($fileo) {
                $imgData = addslashes(file_get_contents($nname));
                echo $imgdata;
                $qryy = "update videos set thumbnail='{$imgData}' where input_file='$outfile'";
                $ress = mysql_query($qryy);
            } else {
                echo "Could not open<br><br>";
            }
            unlink('$nname');
        }
        ?>
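
    The question is about PHP, but the underlying issue is shell quoting of the filename passed to ffmpeg; in PHP the usual move is to wrap each argument with escapeshellarg() before concatenating the command string. Purely as a language-agnostic illustration of the safer pattern (sketched here in Python, with placeholder paths), passing the filename as a separate argument to the process avoids building a shell string at all, so spaces and quotes stop mattering:

        import subprocess
        from pathlib import Path

        src = Path("/home/rootuser/public_html/uploaded_vids/sachin knock.flv")   # name with a space
        out = Path("/home/rootuser/public_html/temp") / (src.stem + ".png")

        # Each list element is passed to ffmpeg as-is, so no shell quoting is needed.
        subprocess.run([
            "ffmpeg", "-i", str(src),
            "-r", "1", "-ss", "00:00:05",
            "-f", "image2", "-s", "120x72",
            str(out),
        ], check=True)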

    Read the article

  • How to combine ASCII text files, then encrypt, then decrypt, and put into a 'File' Class? C++

    - by Joel
    For example, if I have three ASCII files:

        file1.txt
        file2.txt
        file3.txt

    ...and I wanted to combine them into one encrypted file, database.txt. Then in the application I would decrypt database.txt and put each of the original files into a 'File' class on the heap:

        class File {
        public:
            string getContents();
            void setContents(string data);
        private:
            string m_data;
        };

    Is there some way to do this? Thanks
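
    The question asks about C++; purely as a language-agnostic sketch of the combine/encrypt/decrypt flow (shown in Python using the cryptography package's Fernet recipe, which is an assumption — the original post doesn't name an encryption scheme), one simple container format prefixes each file's contents with its name and length:

        from cryptography.fernet import Fernet

        def pack(filenames):
            # Concatenate files into one blob: name, length, then raw bytes for each file.
            blob = b""
            for name in filenames:
                data = open(name, "rb").read()
                blob += f"{name}\n{len(data)}\n".encode() + data
            return blob

        def unpack(blob):
            # Reverse of pack(): returns {filename: contents}.
            files, i = {}, 0
            while i < len(blob):
                j = blob.index(b"\n", i); name = blob[i:j].decode()
                k = blob.index(b"\n", j + 1); size = int(blob[j + 1:k])
                files[name] = blob[k + 1:k + 1 + size]
                i = k + 1 + size
            return files

        key = Fernet.generate_key()          # store this key somewhere safe
        f = Fernet(key)

        encrypted = f.encrypt(pack(["file1.txt", "file2.txt", "file3.txt"]))
        open("database.txt", "wb").write(encrypted)

        restored = unpack(f.decrypt(open("database.txt", "rb").read()))
        print(list(restored.keys()))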

    Read the article

  • How can I roll back files/folders corresponding to the changes done?

    - by OM The Eternity
    I am using PHP and MySQL. I have a PHP script in which I roll back all the data in the database, so that all the old values are restored if an update was done, and all newly inserted values are deleted if an insert was done. Now my goal is to perform the same process for the files/folders associated with those changes, but I can't come up with an approach for doing the rollback job on the files/folders associated with the changes. Can any of you help me or point me toward the best approach?
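
    One common approach (a sketch of the general idea, not taken from the original question) is to snapshot each file into a backup location before modifying it, so the script can copy the snapshots back if the database transaction is rolled back. A minimal illustration with placeholder paths:

        import shutil
        from pathlib import Path

        BACKUP_DIR = Path("/tmp/file_rollback")          # placeholder snapshot location

        def snapshot(path):
            # Copy the file aside before changing it, keeping its original name.
            BACKUP_DIR.mkdir(parents=True, exist_ok=True)
            backup = BACKUP_DIR / Path(path).name
            shutil.copy2(path, backup)
            return backup

        def restore(path, backup):
            # Put the original contents back if the DB transaction was rolled back.
            shutil.copy2(backup, path)

        backup = snapshot("/var/www/uploads/report.pdf")
        # ... modify the file alongside the DB changes ...
        restore("/var/www/uploads/report.pdf", backup)   # called only on rollback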

    Read the article

  • Understanding and Controlling Parallel Query Processing in SQL Server

    Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them.

    Read the article

  • SQL Pre-Con…at the Beach

    - by Argenis
    Building upon the success of SQL Rally 2012 (where we packed a room full of DBAs), my friend Robert Davis [Twitter|Blog] and yours truly will again be delivering our day-long Pre-Conference “Demystifying Database Administration Best Practices” this Friday (6/8/2012) – right before SQLSaturday #132 in Pensacola, FL. If you are in the vicinity of Pensacola, come join us! We had tons of fun at Rally. Robert and I love sharing tips and stories that will help you with your day-to-day duties as a DBA. Some of the topics that we'll touch on (this is by no means a comprehensive list):
    - Active Directory configuration for SQL Server Deployments
    - Windows Server Deployments
    - Storage and I/O
    - High Availability / Disaster Recovery / Business Continuity
    - Replication
    - Day-To-Day Operations
    - Maintenance
    - TempDB
    - Code Reviews
    - Other Database and Server Settings
    Follow this link to sign up for the Pre-Con at Pensacola: http://demystifyingdba.eventbrite.com/ Here's a blog post that Robert made on the subject of Best Practices. Hope to see you there!

    Read the article

  • How to Sync Specific Folders With Dropbox

    - by Matthew Guay
    Would you like to sync specific folders with Dropbox instead of automatically syncing all of your folders to all of your computers? Here's how, using Selective Sync, available in the latest beta version of Dropbox.

    Dropbox is a great tool for keeping your important files synced between your computers, and we have covered many interesting things you can do with your Dropbox account. But until now, there was no way to only sync certain folders with each computer; it was all or nothing. This could be frustrating if you wanted to store large files from one computer but didn't want them on a computer with a smaller hard drive. The latest beta version of Dropbox allows you to selectively choose which folders to sync between computers. Please note: this feature is currently only available in the 0.8 beta version of Dropbox.

    Setup the new Beta
    Download the new beta version of Dropbox 0.8 (link below); choose the correct download for your system. Run the installer as normal. It only took a couple of seconds to install, though it made the taskbar disappear briefly at the end of the installation in our tests. Strangely, the installer doesn't let you know it's finished installing; if you already had a previous version of Dropbox installed, it will simply start working from your system tray as before. If this is a new installation of Dropbox, you will be asked to enter your Dropbox account info or create a new account.

    Selectively Sync Folders
    By default, Dropbox will still sync all of your Dropbox folders to all of your computers. Once this beta is installed, you can choose individual folders or subfolders you don't want to sync. Right-click the Dropbox icon in your system tray and select Preferences. Click the Advanced tab at the top, and then click the new Selective Sync button. Now uncheck any folders you don't want to sync to this computer. These folders will still exist on your other machines and in the Dropbox web interface, but they will not be downloaded to this computer. The default view only shows the top-level folders in your Dropbox account. If you wish to sync certain folders but exclude their subfolders, click the Switch to Advanced View button. Expand any folder and uncheck any subfolders you don't want to sync. Notice that the parent folder's check box is now filled, showing that it is partially synced. Click OK when you've made the changes you want. Dropbox will then make sure you know these folders will stop syncing to this computer; click OK again if you're sure you don't want to sync these folders. Dropbox will clean up your folder and remove the files and folders you don't want synced. Next time you open your Dropbox folder, you'll notice that the folders we unchecked are no longer in this computer's Dropbox folder. They are still in our Dropbox online account, and on any other computers we're syncing with. If you add a new folder with the same name as a folder you stopped syncing, you'll notice a grey minus icon over the folder. This folder will not sync with your other computers or your online Dropbox account. If you want to add these folders back to this computer's Dropbox, just repeat the steps, this time checking the folders you want to sync. If you have any folders that were not syncing before, their names will have (Selective Sync Conflict) added to the end, and they will sync with all of your computers.

    Conclusion
    We're excited that we can now choose exactly which folders we want synced on each computer. Since everything is still synced with the online Dropbox, we can still access any of the folders from anywhere. This makes your Dropbox much more versatile, and can help you keep your folders synced exactly the way you want.

    Links
    Download the new Dropbox 0.8.64 beta
    Signup for Dropbox

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description:
    Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:
    - Private data (do not want to send or store sensitive data off-site)
    - High dollar investment in current infrastructure
    - Applications currently running well, but may need additional periodic capacity
    - Current applications not designed in a stateless fashion
    In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation:
    There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns
    In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it's the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.

    2. System-Centric Hybrid Patterns
    Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities. Code calls come from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.

    There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.
    - Connection resiliency - Applications on-premise normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic (a small sketch of such logic appears after the resource links below).
    - Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
    - Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle.

    Resources:
    How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
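
    As referenced in the Connection resiliency consideration above, here is a minimal sketch of extended retry logic with exponential backoff (illustrative only; the wrapped operation, delays, and retry count are assumptions, and production code would typically use a framework-provided retry policy):

        import random
        import time

        def call_with_retries(operation, attempts=5, base_delay=1.0):
            # Retry a flaky remote call with exponential backoff plus a little jitter.
            for attempt in range(1, attempts + 1):
                try:
                    return operation()
                except ConnectionError as exc:
                    if attempt == attempts:
                        raise                                   # out of retries, surface the error
                    delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
                    print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
                    time.sleep(delay)

        # Example: wrap any remote call, e.g. a request to the hybrid service.
        # result = call_with_retries(lambda: query_remote_service("orders"))   # query_remote_service is hypothetical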

    Read the article
