Search Results

Search found 62606 results on 2505 pages for 'sql files'.

  • force unzip to also delete any missing files

    - by Magnus
    I sometimes unzip an archive into a directory with pre-existing files to update them, using -f, -u or -o to overwrite any clashes. However, I would like the unzip process to also delete any files which were not part of the archive, so that the unzipped version fully matches what was in the archive. (Why not just replace the directory with a fresh unzip? Because I still want to preserve .svn files and just wipe everything else.)
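
    A minimal sketch (in Python, not unzip itself) of a cleanup step that could run after the normal unzip; it assumes the archive stores forward-slash paths and that anything under a .svn directory should be left alone, and the two paths are placeholders:

        import os
        import zipfile

        archive = "update.zip"     # placeholder
        target_dir = "checkout"    # placeholder

        # Paths present in the archive (entries ending in '/' are directories).
        with zipfile.ZipFile(archive) as zf:
            keep = {name for name in zf.namelist() if not name.endswith("/")}

        # Delete anything on disk that is not in the archive, skipping .svn dirs.
        for root, dirs, files in os.walk(target_dir):
            dirs[:] = [d for d in dirs if d != ".svn"]
            for f in files:
                full = os.path.join(root, f)
                rel = os.path.relpath(full, target_dir).replace(os.sep, "/")
                if rel not in keep:
                    os.remove(full)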

    Read the article

  • mpasdlta files -- what are they?

    - by Tmdean
    I noticed a bunch of folders in the root of my hard drive named with a string of hex digits that contain files named with a GUID ending with "mpasdlta.vdm" and "mpavdlta.vdm". From some Googling, I've determined that these files are spyware and virus definition files used by Microsoft Security Essentials. Are these files safe to delete? (Why doesn't Microsoft follow their own guidelines and store application data in the folders intended for that purpose? grumble grumble)

    Read the article

  • Open dialog box won't show files in libraries, but explorer will

    - by Alex
    I have the weirdest problem when trying to open or save files. When I try to get to "My Documents" through the "Libraries" side link it won't show any of my files. It will show them if I go around from the C:// drive into the user files though. I thought it was because I didn't have the right location defined for the "Libraries" shortcut, but when I use "Explorer" to open my "Libraries" it shows all the files. Any ideas?

    Read the article

  • Process keeps creating dump files

    - by Pieter
    We have a Delphi application running on a terminal server that keeps generating dump files. For the same PID, it keeps creating dump files with an interval of around 1 second until the process is killed manually. Another weird thing is the name of the dump files: ±_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_40.dmp ÷_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_42.dmp k_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_39.dmp Ô_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_41.dmp Ž_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_40.dmp The dump files aren't telling us much and we would like to have a suggestion where we should start looking.

    Read the article

  • How can I transfer metadata from several flac files to aac (m4a) files?

    - by abckookooman
    Suppose I have two folders, dir1 and dir2, with several files in each of them. All the files in dir1 are named like "ExampleFileName.flac" and all the files in dir2 are named "ExampleFileName.m4a" - basically their names are the same except for the extension. What I need to do is transfer all of the metadata for each of the files somehow - even though their codecs are different. It would be great if I could do this via the command line, but anything is appreciated. Thank you.
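
    One possible approach, as a minimal Python sketch: it assumes the mutagen library is installed and only carries over a handful of common tags; the FLAC-to-MP4 mapping below is illustrative, not exhaustive.

        import os
        from mutagen.flac import FLAC
        from mutagen.mp4 import MP4

        # Common Vorbis comment keys (FLAC) mapped to MP4 atom names.
        TAG_MAP = {
            "title": "\xa9nam",
            "artist": "\xa9ART",
            "album": "\xa9alb",
            "date": "\xa9day",
            "genre": "\xa9gen",
        }

        def copy_tags(flac_path, m4a_path):
            src = FLAC(flac_path)
            dst = MP4(m4a_path)
            if src.tags is None:
                return
            if dst.tags is None:
                dst.add_tags()
            for vorbis_key, mp4_key in TAG_MAP.items():
                values = src.tags.get(vorbis_key)
                if values:
                    dst.tags[mp4_key] = values  # both sides hold lists of strings
            dst.save()

        dir1, dir2 = "dir1", "dir2"
        for name in os.listdir(dir1):
            base, ext = os.path.splitext(name)
            if ext.lower() == ".flac":
                target = os.path.join(dir2, base + ".m4a")
                if os.path.exists(target):
                    copy_tags(os.path.join(dir1, name), target)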

    Read the article

  • Does Windows 7 delete files created during hibernation?

    - by Koffeehaus
    Somebody was using my Windows 7 and she hibernated it instead of shutting down. Later, I booted up Ubuntu and moved about 2GB worth of files from the Ubuntu partition to the Windows partition. After booting up Windows (from hibernation), I couldn't find any of the files. Then I restarted the PC, and the files showed for a second or two and then disappeared. Did Windows delete all the files I put on it while it was hibernating?

    Read the article

  • Rescuing files and commits from "no branch" in git

    - by Xeoncross
    I started working on some files I had in a git submodule under another project. However, since it was a git submodule it never checked out "master" and instead just checked out the head and placed all the files in the folder in "no branch". Now that I've made some changes by accident to these files I just realized that I was working in a "no branch", submodule of my project. How do I get those files into a branch (like master) so I can rescue them?
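
    For what it's worth, a minimal sketch of one way to make that work reachable again, written as a small Python wrapper around git (it assumes git is on the PATH and the script runs from inside the submodule; the branch name and commit message are just examples). Creating a branch at the detached HEAD keeps uncommitted working-tree changes in place, after which they can be committed and merged into master as usual.

        import subprocess

        def rescue_detached_head(branch_name="rescued-work", repo_dir="."):
            # "No branch" is a detached HEAD; branching from it preserves
            # both the checked-out commit and any uncommitted edits.
            subprocess.check_call(["git", "checkout", "-b", branch_name], cwd=repo_dir)
            # Commit the modified tracked files on the new branch.
            subprocess.check_call(
                ["git", "commit", "-am", "Rescue work done on detached HEAD"],
                cwd=repo_dir,
            )

        rescue_detached_head()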

    Read the article

  • Robocopy. Delete files from source

    - by kurresmack
    Hey, we copied all our files to a new storage server recently. We didn't want to move them at the time because we weren't sure whether files would get lost. The problem now is that we have the files in both places! How can we move only the files that do not exist in the target, and for those that exist in both places, delete them from the source? It is Windows Server 2008. Thanks
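
    A minimal sketch of that logic in Python, as one alternative to working out the exact Robocopy switches; it assumes a file counts as "already moved" simply because the same relative path exists in the target, with no content comparison, and both paths are placeholders:

        import os
        import shutil

        source = r"D:\old_share"        # placeholder
        target = r"\\newserver\share"   # placeholder

        for root, dirs, files in os.walk(source):
            for f in files:
                src_path = os.path.join(root, f)
                rel = os.path.relpath(src_path, source)
                dst_path = os.path.join(target, rel)
                if os.path.exists(dst_path):
                    # Already present on the new server: drop the source copy.
                    os.remove(src_path)
                else:
                    # Not on the new server yet: move it across.
                    os.makedirs(os.path.dirname(dst_path), exist_ok=True)
                    shutil.move(src_path, dst_path)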

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1'000 - 30'000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0 - 300 downloads / file). This is why a single file must be saved on 2 or more servers. My servers are placed in different datacenters (France), with different-sized HDDs (750GB to 4TB). Currently I share the files using PHP and ncftpget / ncftpput, but it's very slow. I need a solution to handle balancing these files across 7+ servers.
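
    A minimal sketch of one placement strategy in Python: it assumes a known free-space figure per server and simply puts each file on the two servers with the most room; the server names and sizes are made up for illustration.

        # Free space per server, in GB (illustrative numbers only).
        servers = {"fr1": 750, "fr2": 2000, "fr3": 4000, "fr4": 1500,
                   "fr5": 3000, "fr6": 900, "fr7": 2500}

        def place(file_name, size_gb, copies=2):
            # Choose the emptiest servers and charge the file's size to them.
            chosen = sorted(servers, key=servers.get, reverse=True)[:copies]
            for s in chosen:
                servers[s] -= size_gb
            return chosen

        print(place("big_file.bin", 1.4))   # e.g. ['fr3', 'fr5']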

    Read the article

  • webscraper grabbing images, but not entering info into database

    - by Jason
    Hello, again. I'm having more issues with my script entering info into my database. The script below grabs a page, strips down the necessary info, then downloads the related image file. After that, it is supposed to enter the information gleaned from the URL into the database. For some reason, the script seems to iterate through the URLs, as I get downloaded images for each URL, but each URL's product is not entered into the database. The script will insert the first product's categories and product info, and then it just stops, and continues to download images. Any suggestions? <?php define('IN_PHPBB', true); $phpbb_root_path = (defined('PHPBB_ROOT_PATH')) ? PHPBB_ROOT_PATH : './'; $phpEx = substr(strrchr(__FILE__, '.'), 1); include($phpbb_root_path . 'common.' . $phpEx); include($phpbb_root_path . 'includes/simple_html_dom.' . $phpEx); // Start session management $user->session_begin(); $auth->acl($user->data); $user->setup(); set_time_limit(259200); function save($in, $out) { $ch = curl_init ($in); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_BINARYTRANSFER,1); $rawdata=curl_exec($ch); curl_close ($ch); if(file_exists($out)) { unlink($out); } $fp = fopen($out,'x'); fwrite($fp, $rawdata); fclose($fp); } function scrape($i) { $url = 'http:/xxxxxxxx/index.php?main_page=product_info&products_id='.$i.'&zenid=e4b7dde8de02e1df005d4549e2e3e529'; echo "$url -- "; $exists = file_get_contents($url); if ($exists != false) { $html = file_get_html($url); foreach($html->find('body') as $html) { $test = $html->find('#productName', 0); if ($test) { $item['title'] = trim($html->find('#productName', 0)->plaintext); $item['price'] = trim($html->find('#productPrices', 0)->plaintext); $item['cat'] = $html->find('#navBreadCrumb', 0)->plaintext; list($home, $item['cat'], $item['subcat'], $title) = explode("::", $item['cat']); $item['cat'] = str_replace("&nbsp;", "", $item['cat']); $item['subcat'] = str_replace("\n", "", str_replace("&nbsp;", "", $item['subcat'])); $item['desc'] = trim($html->find('#productDescription', 0)->plaintext); $item['model'] = $html->find('ul#productDetailsList', 0)->find('li', 0)->plaintext; $item['model'] = explode(":", $item['model']); $item['model'] = trim($item['model'][1]); $item['manufacturer'] = $html->find('ul#productDetailsList', 0)->find('li', 1)->plaintext; $item['manufacturer'] = explode(":", $item['manufacturer']); $item['manufacturer'] = trim($item['manufacturer'][1]); foreach($html->find('img') as $img) { if($img->alt == $item['title']) { $item['img_sm'] = $img->src; } } $ret[] = $item; } } $html->clear(); unset($html); unset($item); return $ret; } else { echo "Could not find page<br />"; } unset($exists); } $i = 1; $end = 9999999; while($i < $end) { $ret = scrape($i); if(isset($ret)) { foreach($ret as $v) { $item['title'] = $v['title']; $item['price'] = $v['price']; $item['desc'] = $v['desc']; $item['model'] = $v['model']; $item['manufacturer'] = $v['manufacturer']; $item['image'] = $v['image']; $item['cat'] = $v['cat']; $item['subcat'] = $v['subcat']; $item['img_sm'] = $v['img_sm']; } unset($ret); unset($v); $sm_img_src = "http://xxxxxx/".$item['img_sm']; $ext = strrchr($item['img_sm'], '.'); $filename = $item['model'] . $ext; $lg_img_src = "http://xxxxx/images/STC/".$filename; $new_sm = "./rip_images/small/{$filename}"; $new_lg = "./rip_images/large/{$filename}"; $item['image'] = $filename; save($lg_img_src,$new_lg); save($sm_img_src,$new_sm); //see if parent cat exists $sql = 'SELECT cat_id FROM ' . 
SHOP_CAT_TABLE . ' WHERE cat_name = "'.$db->sql_escape($item['cat']).'"'; $result = $db->sql_query($sql); $parent = $db->sql_fetchrow($result); $db->sql_freeresult($result); // if not exists if($parent['cat_id'] == '') { //add the parent cat to the db $sql_ary = array( 'cat_name' => $item['cat'], 'cat_parent' => 0 ); $sql = 'INSERT INTO '.SHOP_CAT_TABLE.' '.$db->sql_build_array('INSERT', $sql_ary); $db->sql_query($sql); $cat_id = $db->sql_nextid(); //see if subcat exists $sql = 'SELECT cat_id FROM ' . SHOP_CAT_TABLE . ' WHERE cat_name = "'.$db->sql_escape($item['subcat']).'"'; $result = $db->sql_query($sql); $row = $db->sql_fetchrow($result); $db->sql_freeresult($result); // if not exists if($row['cat_id'] == '') { //add subcat to db $sql_ary = array( 'cat_name' => $db->sql_escape($item['subcat']), 'cat_parent' => $cat_id ); $sql = 'INSERT INTO '.SHOP_CAT_TABLE.' '.$db->sql_build_array('INSERT', $sql_ary); $db->sql_query($sql); $item_cat = $db->sql_nextid(); } else //if exists { $item_cat = $row['cat_id']; } } else //if parent cat exists { //see if subcat exists $sql = 'SELECT cat_id FROM ' . SHOP_CAT_TABLE . ' WHERE cat_name = "'.$db->sql_escape($item['subcat']).'"'; $result = $db->sql_query($sql); $row = $db->sql_fetchrow($result); $db->sql_freeresult($result); // if not exists if($row['cat_id'] == '') { //add the subcat to the db $sql_ary = array( 'cat_name' => $db->sql_escape($item['subcat']), 'cat_parent' => $parent['cat_id'] ); $sql = 'INSERT INTO '.SHOP_CAT_TABLE.' '.$db->sql_build_array('INSERT', $sql_ary); $db->sql_query($sql); $item_cat = $db->sql_nextid(); } else //if exists { $item_cat = $row['cat_id']; } } $sql_ary = array( 'item_title' => $db->sql_escape($item['title']), 'item_price' => $db->sql_escape($item['price']), 'item_desc' => $db->sql_escape($item['desc']), 'item_model' => $db->sql_escape($item['model']), 'item_manufacturer' => $db->sql_escape($item['manufacturer']), 'item_image' => $db->sql_escape($item['image']), 'item_cat' => $db->sql_escape($item_cat) ); $sql = 'INSERT INTO ' . SHOP_ITEM_TABLE . ' ' . $db->sql_build_array('INSERT', $sql_ary); $db->sql_query($sql); garbage_collection(); echo 'Done<br />'; } $i++; unset($item); } ?>

    Read the article

  • Where is my app.config for SSIS?

    Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building an SSIS package, not an application. First of all, let's review a couple of examples where you may need to do this. You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block. You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings. You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings. You need to change behaviours of the .NET Framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP connection manager behaves. You are consuming a WCF service and wish to specify the endpoint in configuration. There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes.
    There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package. The folders we need to look in vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too. To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files, C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below. There are a total of five specific configuration files that you may need to change; each one is detailed below.
    DTExec.exe.config - DTExec.exe is the standalone command-line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.
    DtsDebugHost.exe.config - DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders. This may surprise some people as Visual Studio is only 32-bit, but thankfully the debugger supports both. This can be set in the project properties; see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.
    dtshost.exe.config - dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders.
    devenv.exe.config - Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design time, such as in a Task Editor or Connection Manager editor. Visual Studio 2005 for SQL Server 2005 - C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config. Visual Studio 2008 for SQL Server 2008 - C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config. Visual Studio is only available in 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.
    DTExecUI.exe.config - The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below: C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe and C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe. A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself.
    In summary, we have covered the assembly configuration files for all of the standard methods of building and running an SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program's app.config as well.

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.
    What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.
    NoSQL and MapReduce. Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...
    Vertical Behavioral Analytics. This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data.
    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples however are emerging, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific, focused group of people in a specific industry is what makes Big Data interesting.
    Building a Vertical Behavioral Analysis System. Well, that is going to be interesting. I have not seen much going on in that space, and if I had to have some criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really on big data. It used a sample of data from the customer's data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, by the way, does not mean "no data movement at all"; what it means is that you will do this on the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...
    Comments? As it is still early days with these systems, I'm very interested in seeing reactions and responses to some of these thoughts...

    Read the article

  • Linq 2 SQL One to Zero or One relationship possible?

    - by Mr. Flibble
    Is it possible to create a one to zero or one relationship in Linq2SQL? My understanding is that to create a one to one relationship you create a FK relationship on the PK of each table. But you cannot make the PK nullable, so I don't see how to make a one to zero or one relationship work? I'm using the designer to automatically create the model - so I would like to know how to set up the SQL tables to induce the relationship - not some custom ORM code.

    Read the article

  • Django custom SQL returning single row of results when query returns 2?

    - by Alvin
    I have a custom SQL call that is returning different results to the template than I get when I run the same query against the database directly, 1 row vs 2 Query - copied from Django Debug Toolbar: SELECT ((Sum(new_recruit_interviews) / Sum(opportunities_offered)) * 100) as avg_recruit, ((Sum(inspections) / Sum(presentations)) * 100) as avg_inspect, ((Sum(contracts_signed) / Sum(roof_approvals)) * 100) as avg_contracts, ((Sum(adjusters) / Sum(contracts_signed)) * 100) as avg_adjusters, ((Sum(roof_approvals) / Sum(adjusters)) *100) as roof_approval_avg, ((Sum(roof_turned_in) / Sum(adjusters)) * 100) as roof_jobs_avg, Sum(roof_turned_in) as roof_jobs_total, ((Sum(siding_approvals) / Sum(adjusters)) *100) as siding_approval_avg, ((Sum(siding_turned_in) / Sum(adjusters)) * 100) as siding_jobs_avg, Sum(siding_turned_in) as siding_jobs_total, ((Sum(gutter_approvals) / Sum(adjusters)) *100) as gutter_approval_avg, ((Sum(gutter_turned_in) / Sum(adjusters)) * 100) as gutter_jobs_avg, Sum(gutter_turned_in) as gutter_jobs_total, ((Sum(window_approvals) / Sum(adjusters)) *100) as window_approval_avg, ((Sum(window_turned_in) / Sum(adjusters)) * 100) as window_jobs_avg, Sum(window_turned_in) as window_jobs_total, (Sum(roof_turned_in) + Sum(siding_turned_in) + Sum(gutter_turned_in) + Sum(window_turned_in)) as total_jobs, (((Sum(collections_jobs_new) + Sum(collections_jobs_previous)) / (Sum(roof_turned_in) + Sum(siding_turned_in) + Sum(gutter_turned_in) + Sum(window_turned_in))) * 100) as total_collections, sales_report_salesmen.location_id as detail_id, business_unit_location.title as title FROM sales_report_salesmen Inner Join auth_user ON sales_report_salesmen.user_id = auth_user.id Inner Join business_unit_location ON sales_report_salesmen.location_id = business_unit_location.id GROUP BY location_id Results from direct query running the above query: INSERT INTO `` (`avg_recruit`, `avg_inspect`, `avg_contracts`, `avg_adjusters`, `roof_approval_avg`, `roof_jobs_avg`, `roof_jobs_total`, `siding_approval_avg`, `siding_jobs_avg`, `siding_jobs_total`, `gutter_approval_avg`, `gutter_jobs_avg`, `gutter_jobs_total`, `window_approval_avg`, `window_jobs_avg`, `window_jobs_total`, `total_jobs`, `total_collections`, `detail_id`, `title`) VALUES (95.3968, 92.8178, 106.9622, 90.2928, 103.5420, 103.5670, 4152, 100.2494, 106.8845, 4285, 120.1297, 86.2559, 3458, 92.9658, 106.1611, 4256, 16151, 4.281469, 12, 'St Paul, MN'); VALUES (90.2982, 73.3723, 97.8474, 104.5433, 97.7585, 86.1848, 1884, 109.9268, 109.3321, 2390, 81.0156, 96.4318, 2108, 91.7200, 123.8792, 2708, 9090, 4.531573, 13, 'Denver, CO'); Results from template: {'roof_jobs_total': Decimal('4152'), 'gutter_jobs_total': Decimal('3458'), 'avg_adjusters': Decimal('90.2928'), 'title': u'St Paul, MN', 'window_approval_avg': Decimal('92.9658'), 'total_collections': Decimal('4.281469'), 'gutter_approval_avg': Decimal('120.1297'), 'avg_recruit': Decimal('95.3968'), 'siding_approval_avg': Decimal('100.2494'), 'window_jobs_total': Decimal('4256'), 'detail_id': 12L, 'siding_jobs_avg': Decimal('106.8845'), 'avg_inspect': Decimal('92.8178'), 'roof_approval_avg': Decimal('103.5420'), 'roof_jobs_avg': Decimal('103.5670'), 'total_jobs': Decimal('16151'), 'window_jobs_avg': Decimal('106.1611'), 'avg_contracts': Decimal('106.9622'), 'gutter_jobs_avg': Decimal('86.2559'), 'siding_jobs_total': Decimal('4285')} Tried tweaking it a few ways and running the results through various for loops, keep getting the same result where my results are a single row through the Django 
template and the expected results (through console) have 2 rows The row that is coming back is the same as the first row returned through the console query so I'm thinking that it is running correctly just a matter of passing the results through... for good measure this is the code I'm using to generate the query (yes it's a bit ugly, been playing with it) def sql_grouped(table, fields, group_by=False, where=False): from django.db import connection query = 'SELECT %s FROM %s' % (fields, table) if where: query = query + ' WHERE %s' % (where) if group_by: query = query + ' GROUP BY %s' % (group_by) cursor = connection.cursor() cursor.execute(query) desc = cursor.description data = [dict(zip([col[0] for col in desc], row)) for row in cursor.fetchall()] return data[0] any feedback is greatly appreciated - been tinkering with this since I realized I could skip a few steps by generating my averages directly within the SQL rather than post-process
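
    For what it's worth, a minimal Python sketch of the helper above with one change: the original returns data[0], i.e. only the first dictionary built from the cursor, which would explain seeing a single row in the template while the console shows two. If the intent is to hand every grouped row to the template, returning the whole list (and iterating over it in the template) may be what was wanted.

        def sql_grouped(table, fields, group_by=False, where=False):
            from django.db import connection
            query = 'SELECT %s FROM %s' % (fields, table)
            if where:
                query += ' WHERE %s' % where
            if group_by:
                query += ' GROUP BY %s' % group_by
            cursor = connection.cursor()
            cursor.execute(query)
            desc = cursor.description
            # One dict per returned row; return them all instead of data[0].
            return [dict(zip([col[0] for col in desc], row)) for row in cursor.fetchall()]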

    Read the article

  • Performance using T-SQL PIVOT vs SSIS PIVOT Transformation Component.

    - by Nev_Rahd
    Hi, I am in the process of building a dimension from the EDW (source), wherein I need to pivot columns of the source to load the dimension. Currently most of the pivoting I am doing is with T-SQL PIVOT, which is then used in my SSIS package to merge with the Dim table. This pivoting can also be achieved with the SSIS PIVOT Transformation component. In regard to performance, which approach would be the best? Thanks

    Read the article

  • Is there some way to access SQL Server from a z/OS mainframe and have the result in an IBM 3270 terminal?

    - by systempuntoout
    I tagged this question "impossible" because after a lot of googling, I have not found any trace/reference to a possible answer. I'm asking if there is some way/dirty trick (possibly cheap) to access Microsoft SQL Server from a z/OS mainframe (COBOL programs) and have the result in 3270 terminal emulation; I know that 3270 is a pretty old system, but in bank data centers (CED) it is still very popular.

    Read the article

  • SqlMetal, SQL Server 2008 database, table with HierarchyID, dal cs file is only sometimes created?

    - by judek.mp
    I have 2 databases, each with a table that has a HierarchyID field. For one database I can get a dal cs file; for the other database I cannot get a dal cs file. HBus is the database I can get the dal cs for:
    SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
    This kicks out a file which works, but excludes the HierarchyID field for the table and includes all other fields for that table. This is OK, I do not mind. The above cmd line kicks out a warning but still produces a file, like so:
    SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
    Microsoft (R) Database Mapping Generator 2008 version 1.00.30729 for Microsoft (R) .NET Framework version 3.5 Copyright (C) Microsoft Corporation. All rights reserved.
    Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.HMsg' from SqlServer because the column's DbType is a user-defined type (UDT).
    Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.vwHMsg' from SqlServer because the column's DbType is a user-defined type (UDT).
    HMsg is a table with a HierarchyID field. I have another database, Elf, with almost the same setup, but I get a warning and an error when using SqlMetal and I do not get a dal cs file:
    SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
    An error as well as the warning, and the cs file fails to appear on my disc :-(
    SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
    Microsoft (R) Database Mapping Generator 2008 version 1.00.30729 for Microsoft (R) .NET Framework version 3.5 Copyright (C) Microsoft Corporation. All rights reserved.
    Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.EntityLink' from SqlServer because the column's DbType is a user-defined type (UDT).
    Error : Requested value 'ELF.SYS.HIERARCHYID' was not found.
    The fields are declared the same way: in the Elf db, OrgNode [HierarchyID] null, and in the HBus db, OrgNode [HierarchyID] null. Both databases are in the same instance of SQL Server 2008, so HierarchyID is an inbuilt type; neither db has a HierarchyID UDT. Cheers in advance for any replies.

    Read the article

  • Is there a compact or express SQL Server version that I can package with my WinForms app & is free?

    - by Greg
    Hi, Is there a lightweight version of SQL Server I could use that has these characteristics: free (assuming my WinForms app is semi-commercial); can be seamlessly packaged for deployment as part of the WinForms ClickOnce application (i.e. ease of installation for the user); lightweight for the user (ideally something that just runs when the WinForms application that uses it is running - but better than using XML serialization for persistence). Thanks

    Read the article
