Search Results

Search found 89638 results on 3586 pages for 'file table'.


  • Resuming File Downloads in Ruby on Rails

    - by jaycode
    Hi, this has been asked here: http://stackoverflow.com/questions/1840413/resuming-file-downloads-in-ruby-on-rails-range-header-support but there was no answer. I am having a similar problem; could anybody help, please? Thanks in advance. Alright, I am getting close. It seems I need to set the Content-Length or Content-Range header, as described here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.13. I haven't figured out how. Does anybody know? Jay

        response.header["Content-Range"] = "20000-#{size}"
        send_file "#{Dir.pwd}/products/filename.zip", :type => 'application/zip', :size => (size - 20000)

    doesn't work.
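
    One direction worth trying - a minimal sketch of a resumable download action, assuming the same fixed file path as above and a simple "bytes=N-" request. The Content-Range format ("bytes first-last/total") comes from RFC 2616; the action name and the open-seek-read approach are illustrative only:

        # Hypothetical Rails action: serve a partial download when the client
        # sends a Range header such as "bytes=20000-".
        def download
          path = "#{Dir.pwd}/products/filename.zip"   # hypothetical path
          size = File.size(path)
          range = request.env["HTTP_RANGE"]           # e.g. "bytes=20000-"
          if range =~ /bytes=(\d+)-/
            offset = $1.to_i
            response.status = 206                     # Partial Content
            response.headers["Accept-Ranges"] = "bytes"
            response.headers["Content-Range"] = "bytes #{offset}-#{size - 1}/#{size}"
            data = File.open(path, "rb") { |f| f.seek(offset); f.read }
            send_data data, :type => "application/zip", :filename => "filename.zip"
          else
            send_file path, :type => "application/zip"
          end
        end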


  • Join and sum incompatible matrices through data.table

    - by leodido
    My goal is to "sum" two not compatible matrices (matrices with different dimensions) using (and preserving) row and column names. I've figured this approach: convert the matrices to data.table objects, join them and then sum columns vectors. An example: > M1 1 3 4 5 7 8 1 0 0 1 0 0 0 3 0 0 0 0 0 0 4 1 0 0 0 0 0 5 0 0 0 0 0 0 7 0 0 0 0 1 0 8 0 0 0 0 0 0 > M2 1 3 4 5 8 1 0 0 1 0 0 3 0 0 0 0 0 4 1 0 0 0 0 5 0 0 0 0 0 8 0 0 0 0 0 > M1 %ms% M2 1 3 4 5 7 8 1 0 0 2 0 0 0 3 0 0 0 0 0 0 4 2 0 0 0 0 0 5 0 0 0 0 0 0 7 0 0 0 0 1 0 8 0 0 0 0 0 0 This is my code: M1 <- matrix(c(0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0), byrow = TRUE, ncol = 6) colnames(M1) <- c(1,3,4,5,7,8) M2 <- matrix(c(0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0), byrow = TRUE, ncol = 5) colnames(M2) <- c(1,3,4,5,8) # to data.table objects DT1 <- data.table(M1, keep.rownames = TRUE, key = "rn") DT2 <- data.table(M2, keep.rownames = TRUE, key = "rn") # join and sum of common columns if (nrow(DT1) > nrow(DT2)) { A <- DT2[DT1, roll = TRUE] A[, list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1, X5 = X5 + X5.1, X7, X8 = X8 + X8.1), by = rn] } That outputs: rn X1 X3 X4 X5 X7 X8 1: 1 0 0 2 0 0 0 2: 3 0 0 0 0 0 0 3: 4 2 0 0 0 0 0 4: 5 0 0 0 0 0 0 5: 7 0 0 0 0 1 0 6: 8 0 0 0 0 0 0 Then I can convert back this data.table to a matrix and fix row and column names. The questions are: how to generalize this procedure? I need a way to automatically create list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1, X5 = X5 + X5.1, X7, X8 = X8 + X8.1) because i want to apply this function to matrices which dimensions (and row/columns names) are not known in advance. In summary I need a merge procedure that behaves as described. there are other strategies/implementations that achieve the same goal that are, at the same time, faster and generalized? (hoping that some data.table monster help me) to what kind of join (inner, outer, etc. etc.) is assimilable this procedure? Thanks in advance. p.s.: I'm using data.table version 1.8.2 EDIT - SOLUTIONS @Aaron solution. No external libraries, only base R. It works also on list of matrices. add_matrices_1 <- function(...) { a <- list(...) cols <- sort(unique(unlist(lapply(a, colnames)))) rows <- sort(unique(unlist(lapply(a, rownames)))) out <- array(0, dim = c(length(rows), length(cols)), dimnames = list(rows,cols)) for (m in a) out[rownames(m), colnames(m)] <- out[rownames(m), colnames(m)] + m out } @MadScone solution. Used reshape2 package. It works only on two matrices per call. add_matrices_2 <- function(m1, m2) { m <- acast(rbind(melt(M1), melt(M2)), Var1~Var2, fun.aggregate = sum) mn <- unique(colnames(m1), colnames(m2)) rownames(m) <- mn colnames(m) <- mn m } BENCHMARK (100 runs with microbenchmark package) Unit: microseconds expr min lq median uq max 1 add_matrices_1 196.009 257.5865 282.027 291.2735 549.397 2 add_matrices_2 13737.851 14697.9790 14864.778 16285.7650 25567.448 No need to comment the benchmark: @Aaron solution wins. I'll continue to investigate a similar solution for data.table objects. I'll add other solutions eventually reported or discovered.


  • NoSQL as file meta database

    - by fga
    I am trying to implement a virtual file system structure in front of an object storage (Openstack). For availability reasons we initially chose Cassandra; however, while designing the file system data model, it turned out to be a tree structure similar to a relational model. Here is the dilemma: for availability and partition tolerance we need NoSQL, but our data model is relational. The intended file system must be able to handle filtered search based on date, name, etc. as fast as possible. So what path should I take? Stick to relational with some indexing mechanism backed by third-party tools like Apache Solr, or dig deeper into NoSQL and find a suitable model and a database satisfying that model? P.S.: Currently the NoSQL choices proposed by my colleagues are Cassandra and MongoDB.
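
    If the NoSQL route wins, one common Cassandra idiom is to denormalize the tree so that each directory listing is a single-partition read. A hypothetical CQL sketch (all names invented):

        CREATE TABLE files_by_parent (
            parent_path text,      -- e.g. '/home/user/docs'
            name        text,      -- file or subdirectory name
            is_dir      boolean,
            created     timestamp,
            size        bigint,
            object_id   uuid,      -- pointer to the Openstack object
            PRIMARY KEY (parent_path, name)
        );

        -- list a directory:
        SELECT * FROM files_by_parent WHERE parent_path = '/home/user/docs';

    The trade-off is exactly the one described above: filtered search by date or name then needs extra query tables or an external indexer such as Solr.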


  • How to organize my site's file system properly?

    - by Wolfpack'08
    Doing some reading on Stack Overflow, I've found a lot of information suggesting that proper organization of a file system is crucial to a well-written web app. One key piece of evidence is the high frequency of references to "separation of concerns" in questions related to keeping programs organized. Now, I've found some information on organizing file systems (the Filesystem Hierarchy Standard) from 2004. It raises two concerns: first, the standard is a bit dated, so I believe it may be possible to do better given the changes in technology over the past 8 years; second, and most important, my application is very small compared to an entire Linux distro, so I think its file system should be organized very differently. Here's what I'm looking at currently:

        /scripts, /databases, /www
        /www -> /dev, /production
        /production -> login, router, admin pages, /sites
        /sites -> content types, static pages
        /modules, /includes, /css, /media
        /media -> /module-specific-media


  • SQLite join selection from the same table using a reference from another table

    - by daikini
    I have two tables:

        table: points
        |key_id | name | x  | y  |
        --------------------------
        |1      | A    | 10 | 20 |
        |2      | A_1  | 11 | 21 |
        |3      | B    | 30 | 40 |
        |4      | B_1  | 31 | 42 |

        table: pairs
        |f_key_p1 | f_key_p2 |
        ----------------------
        |1        | 2        |
        |3        | 4        |

    Table 'pairs' defines which rows in table 'points' should be paired. How can I query the database to select the paired rows? My desired query result would be like this:

        |name_1 |x_1 |y_1 |name_2 |x_2 |y_2 |
        -------------------------------------
        |A      |10  |20  |A_1    |11  |21  |
        |B      |30  |40  |B_1    |31  |42  |
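
    A self-join through the pairs table should produce that result - a sketch, assuming the column names shown above:

        SELECT p1.name AS name_1, p1.x AS x_1, p1.y AS y_1,
               p2.name AS name_2, p2.x AS x_2, p2.y AS y_2
        FROM pairs
        JOIN points AS p1 ON p1.key_id = pairs.f_key_p1
        JOIN points AS p2 ON p2.key_id = pairs.f_key_p2;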


  • <asp:Table> Vs html <table>

    - by keith
    What are the pros and cons of using the ASP.NET control compared to the old reliable HTML table implementation? I know that the asp:Table will end up on the returned page as an HTML table, and from looking into it so far people say it's easier to work with the asp:Table in server-side code, but I'd love to hear what the Stack Overflow community has to say about the matter.
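
    For context, the usual argument for asp:Table is programmatic access from code-behind. A hypothetical sketch, assuming an <asp:Table ID="resultsTable" runat="server" /> is declared on the page (the control ID is invented):

        // Build a row at runtime in the code-behind.
        TableRow row = new TableRow();
        TableCell cell = new TableCell();
        cell.Text = "Generated on the server";
        row.Cells.Add(cell);
        resultsTable.Rows.Add(row);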


  • Does this file format exist?

    - by Jon Chase
    Is there a file format that handles the following use case? I'd like to create a tar file (or whatever - I'm just using tar here because it's a well-known file format for containing multiple files) that would be usable even if I only had access to specific chunks of said file. For example, say I tar up my mp3 and photo collection into a 100GB tar file, then put the file into some long-term storage somewhere. Later, I want to access a specific mp3 file. I don't want to download the entire 100GB tar file just to get to one mp3. In fact, let's say I can't download the entire 100GB tar file. Instead, I'd like to say "give me megabytes 10 through 19 of the 100GB tar file" and then have the mp3 magically extracted from those 10 megabytes. Does a file format like this exist?


  • Foremost custom file type not accepted by -t argument

    - by Channel72
    I'm trying to recover a deleted file on an ext3 file system using the foremost utility. The file I want to recover is a hpp C++ source code file. However, foremost does not automatically support the hpp file extension, so I have to add it to the config file. So, following the instructions on the man page, I add the following line to the config file: hpp n 50000 include include ASCII Then I run foremost as follows: $foremost -v -T -t hpp -i /dev/md0 -o /home/recover/ Instead of doing anything, it just displays the help message. If I change the hpp to htm or jpg, it works. So apparently foremost isn't accepting the custom file type I added into the config file. But I've looked over this dozens of times now, and I can't see what I'm doing wrong. I'm following the instructions exactly. Why doesn't foremost recognize the new file type I added to the config file?


  • Error with SQL Server Setup 2012 on Windows 2012

    - by Jeff
    I am trying to install SQL Server on Windows 2012. I was able to finally get the wizard up and running after making some changes on the server, but now it fails no matter what I do, with the following error:

        TITLE: SQL Server Setup failure.
        SQL Server Setup has encountered the following error:
        There is an error in XML document (108, 148).
        For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&EvtType=0x066FCAFD%25400x5539C151
        LinkID: 20476
        Product Name: Microsoft SQL Server
        Message Source: setup.rll
        Message ID: 50000
        EvtType: 0x066FCAFD%400x5539C151

    What I've tried: installing from the command line with /q. Result from the command-line installation:

        Error result: -2147467259
        Result facility code: 0
        Result error code: 16389
        Please review the summary.txt log for further details

    The verbose command-line installation reveals:

        Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1033: NotInstalled
        Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1036: NotInstalled
        Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1040: NotInstalled
        Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1041: NotInstalled
        Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1042: NotInstalled
        Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1046: NotInstalled
        Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1049: NotInstalled
        Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_2052: NotInstalled
        Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_3082: NotInstalled
        Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\x64\sql_bids_loc.msi' does not exist
        Package ID sql_bids_loc_Cpu64_1053: NotInstalled
        Sco: File 'C:\SQL Install\x64\setup\x64\sql_ssms.msi' does not exist
        Package ID sql_ssms_Cpu64: NotInstalled
        Sco: File 'C:\SQL Install\1028_CHT_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1028: NotInstalled
        Sco: File 'C:\SQL Install\1031_DEU_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1031: NotInstalled
        Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1033: NotInstalled
        Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1036: NotInstalled
        Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1040: NotInstalled
        Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1041: NotInstalled
        Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1042: NotInstalled
        Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1046: NotInstalled
        Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1049: NotInstalled
        Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_2052: NotInstalled
        Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_3082: NotInstalled
        Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\x64\sql_ssms_loc.msi' does not exist
        Package ID sql_ssms_loc_Cpu64_1053: NotInstalled
        Sco: File 'C:\SQL Install\x64\setup\sql_common_core_msi\x64\sql_common_core.msi' does not e
        Package ID sql_common_core_Cpu64: NotInstalled
        Sco: File 'C:\SQL Install\1028_CHT_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1028: NotInstalled
        Sco: File 'C:\SQL Install\1031_DEU_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1031: NotInstalled
        Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1033: NotInstalled
        Sco: File 'C:\SQL Install\1036_FRA_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1036: NotInstalled
        Sco: File 'C:\SQL Install\1040_ITA_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1040: NotInstalled
        Sco: File 'C:\SQL Install\1041_JPN_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1041: NotInstalled
        Sco: File 'C:\SQL Install\1042_KOR_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1042: NotInstalled
        Sco: File 'C:\SQL Install\1046_PTB_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1046: NotInstalled
        Sco: File 'C:\SQL Install\1049_RUS_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1049: NotInstalled
        Sco: File 'C:\SQL Install\2052_CHS_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_2052: NotInstalled
        Sco: File 'C:\SQL Install\3082_ESN_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_3082: NotInstalled
        Sco: File 'C:\SQL Install\1053_SVE_LP\x64\setup\sql_common_core_loc_msi\x64\sql_common_core
        Package ID sql_common_core_loc_Cpu64_1053: NotInstalled
        Sco: File 'C:\Users\Administrator\AppData\Local\Temp\2\SQL Server 2012\Setup\1033_ENU_LP\x6 lSupport.msi' does not exist
        Package ID SqlSupport_Cpu64: NotInstalled
        Sco: File 'C:\SQL Install\redist\watson\x86\dw20shared.msi' does not exist
        Package ID WatsonX86_Cpu32: NotInstalled
        Package ID sqlncli_Cpu64: NotInstalled
        Package ID SqlLocalDB_Cpu64: NotInstalled
        Package ID SqlLocalDB_CTP3_Cpu64: NotInstalled
        Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x86\SSDTStub.msi' does not exist
        Package ID SSDTStub_Cpu32: NotInstalled
        Sco: File 'C:\SQL Install\1033_ENU_LP\x64\setup\x86\SSDTDBSvcExternals.msi' does not exist

    What does this mean?


  • Need a solution to store images (1 billion, 1,000,000,000) which users will upload to a website via PHP or JavaScript upload [on hold]

    - by wish_you_all_peace
    I need a solution to store images (1 billion) which users will upload to a website via PHP or JavaScript upload (the website will have 1 billion page views a month, using Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of size 90px by 90px, and 10 large, script-resized images with a maximum width or height of 500px depending on the shape of the image (square, rectangle, horizontal, vertical, etc.). Assume this to be a LEMP-stack (Linux Nginx MySQL PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of images (user-uploaded images in this case) in a single directory, NFS, etc. is bad, please explain the architecture and configuration of the entire storage setup you would recommend for 1 billion images (no third-party cloud storage like S3 etc.; it has to be within a private data center using our own hardware and resources). The solution has to cover both storage and organization of the uploaded images. How should we organize the users' images if a single user will have no more than 20 images (10 thumbs and 10 large with either width or height 500px)? Please consider that this has to be organized in a structural way so we can fetch a single user's images via PHP/JavaScript, or programmatically through an API, using some type of unique user identifier(s).
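
    One widely used building block for this kind of setup is hash-based directory sharding, so that no single directory ever holds millions of files. A hypothetical PHP sketch of such a path scheme (the base path, names, and JPEG-only assumption are invented):

        <?php
        // Shard a user's images across 256 x 256 directories by hashing the user id.
        function image_path($userId, $imageId, $variant) {
            $hash = md5((string)$userId);
            return sprintf('/srv/images/%s/%s/%d/%d_%s.jpg',
                substr($hash, 0, 2),   // first shard level: 256 buckets
                substr($hash, 2, 2),   // second shard level: 256 buckets
                $userId,
                $imageId,
                $variant);             // e.g. 'thumb' or 'large'
        }

        echo image_path(42, 7, 'thumb'); // e.g. /srv/images/a1/c0/42/7_thumb.jpg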


  • Copy website content from WebsiteA.com/file.html to WebsiteB.com/file at a set time interval

    - by Jimbo Mombasa
    I want to copy a page from http://stats.pingdom.com/file... to http://mywebsite.com/file every 10 minutes. Then, with purple-include, I want to do a transclusion and display it on http://mywebsite.com/page.html. So the task is to download http://stats.pingdom.com/file to http://mywebsite.com/file. I figured out the transclusion part, but I do not know how to copy a webpage from A to B. Is there any script for this, or how can I do it?
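
    The copy step itself can be a cron job - a sketch using curl, with the local path invented:

        # crontab entry: fetch the remote page every 10 minutes
        */10 * * * * curl -s -o /var/www/mywebsite/file http://stats.pingdom.com/file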


  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn’t have an easy way to take a table backup, so I often use bcp (the Bulk Copy Program) to accomplish the same goal. I’ve mentioned this before, and someone told me that when they tried it they couldn’t restore the table – ah, the dangers of telling people half the information! I should have mentioned that you need to have a “format file” ready if the table does not exist at the destination. In my case I already had the table; in this person’s case they did not. The format file can be used to rebuild that table structure before the data is bcp’d in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx There’s another way to back up a table, and that’s to create a Filegroup and place the table there. Then you can take a Filegroup backup to back up a single table. Of course, there are other methods of moving a single table’s data in and out, including SQL Server Integration Services and even the older Data Transformation Services, or simply using the SQLCMD or PowerShell utilities to run a query and save the output to a file. In fact, these days I’m using a PowerShell script to build INSERT statements from that query. That could also be modified to create the table structure (or modify one if needed) quite easily.
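
    For reference, a sketch of the bcp round trip; the database, table, server, and paths below are placeholders, and -T assumes a trusted (Windows) connection:

        rem export the table in native format
        bcp MyDB.dbo.MyTable out C:\backup\MyTable.dat -n -T -S MyServer

        rem generate a format file describing the table's structure
        bcp MyDB.dbo.MyTable format nul -n -T -S MyServer -f C:\backup\MyTable.fmt

        rem reload the data on the destination, letting the format file drive the column mapping
        bcp MyDB.dbo.MyTable in C:\backup\MyTable.dat -T -S MyServer -f C:\backup\MyTable.fmt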


  • What You Said: How Do You Sync Your Files Between Your Devices?

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tricks and techniques for keeping files synced between your different devices. Now we’re back to highlight how you do it. Overwhelmingly, you do it with Dropbox. Despite the proliferation of different platforms, few inroads have been made into any sort of universal syncing. We heard from quite a few different readers, and by far the most popular option was to use Dropbox to ensure that you could get the music and documents you wanted whether you were on your desktop, laptop, netbook, iPhone, or Android device. In the same breath, however, nearly all of you added on an additional service. The real message, it would seem, is that there simply isn’t a single service good enough to meet all of the needs most users have, all of the time. The most common response to our Ask the Readers question was “Dropbox and…”; this pattern is illustrated nicely in the following quote. Kim writes:

        Dropbox for all kinds of things. (Would also use Sugarsync, but it doesn’t support Linux.) Lastpass for passwords. Xmarks for bookmarks, although I’m going to try Firefox Sync soon. Evernote for things like shell commands I might want someday. Google Beta for music, once I get it uploaded. I have an Amazon account too, but Google gives you more space. Gmail.

    Michael finds himself in a similar situation.


  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string for each instance of a given object. The string will receive about two appends per minute and will not be retrieved very frequently. The worst-case scenario is a 5-10MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In that case, which one? Any other thoughts?


  • Adding a Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column that is nullable, or one that has default values. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

        USE master
        GO
        -- Drop the database if it exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
        DROP DATABASE AddColumn
        -- Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        -- Drop the table if it exists
        IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
        DROP TABLE ExistingTable
        GO
        -- Create the table
        CREATE TABLE ExistingTable
        (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
         DateTime1 DATETIME DEFAULT GETDATE(),
         DateTime2 DATETIME DEFAULT GETDATE(),
         DateTime3 DATETIME DEFAULT GETDATE(),
         DateTime4 DATETIME DEFAULT GETDATE(),
         Gendar CHAR(1) DEFAULT 'M',
         STATUS1 CHAR(1) DEFAULT 'Y')
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a Column

    Before adding a column, let us look at some of the details of the database.

        DBCC IND (AddColumn, ExistingTable, 1)

    By running the above query, you will see 637 pages for the created table.

    Adding a Column

    You can add a column to the table with the following statement.

        ALTER TABLE ExistingTable ADD NewColumn INT NULL

    The above will add a column with a null value for the existing records. Alternatively, you could add a column with default values.

        ALTER TABLE ExistingTable ADD NewColumn INT NOT NULL DEFAULT 1

    The above statement will add a column with a value of 1 for the existing records. In the table below I measured the performance difference between the two statements.

        Parameter  | Nullable Column | Default Value
        CPU        | 31              | 702
        Duration   | 129 ms          | 6653 ms
        Reads      | 38              | 116,397
        Writes     | 6               | 1329
        Row Count  | 0               | 100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why the column with a default value takes more duration and CPU to add. We can verify this by several methods.

    Number of Pages

    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it "at your own risk".

        DBCC IND (AddColumn, ExistingTable, 1)

        Before adding the columns            | 637
        Adding a column with NULL            | 637
        Adding a column with a DEFAULT value | 1270

    This clearly shows that pages are physically modified. Please note, the high value indicated for "Adding a column with a DEFAULT value" is also a result of page splits. Continues…


  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
              <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has setItems and getItems methods defined that are accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable - the #{row} reference - is filled with the data object displayed in the currently rendered table row. The managed bean getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        // this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
          FacesContext fctx = FacesContext.getCurrentInstance();
          ELContext elctx = fctx.getELContext();
          ExpressionFactory efactory = fctx.getApplication().getExpressionFactory();
          ValueExpression ve = efactory.createValueExpression(elctx, "#{row}", Object.class);
          Row rw = (Row) ve.getValue(elctx);
          // use one of the row attributes to determine which list to query and
          // show in the current af:selectOneChoice list
          // ...
          ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
          for ( ... ) {
            SelectItem item = new SelectItem();
            item.setLabel(...);
            item.setValue(...);
            alsi.add(item);
          }
          return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component - af:selectOneChoice in this example - is instantiated once for the column and then repeatedly used to display the cell data for the individual table rows. This, however, means that you cannot refresh a single select one choice component in a table to change its list values. Instead, the whole table needs to be refreshed, rerunning the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, with Business Services with low-latency data fetching (e.g. ADF Business Components and EJB) and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).


  • Partition Table and Exadata Hybrid Columnar Compression (EHCC)

    - by Bandari Huang
    Create an EHCC table:

        CREATE TABLE ... COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH];

        SELECT owner, table_name, compress_for FROM DBA_TABLES WHERE compression = 'ENABLED';

    Convert a table/partition/subpartition to EHCC:

        ALTER TABLE table_name MOVE COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];
        ALTER TABLE table_name MOVE PARTITION partition_name COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];
        ALTER TABLE table_name MOVE SUBPARTITION subpartition_name COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];

        SELECT owner, table_name, compress_for FROM DBA_TABLES WHERE compression = 'ENABLED';
        SELECT table_owner, table_name, partition_name, compress_for FROM DBA_TAB_PARTITIONS WHERE compression = 'ENABLED';
        SELECT table_owner, table_name, subpartition_name, compress_for FROM DBA_TAB_SUBPARTITIONS WHERE compression = 'ENABLED';

    Rebuild unusable indexes:

        SELECT index_name FROM dba_indexes WHERE status = 'UNUSABLE';
        SELECT index_name, partition_name FROM dba_ind_partitions WHERE status = 'UNUSABLE';
        SELECT index_name, subpartition_name FROM dba_ind_subpartitions WHERE status = 'UNUSABLE';

        ALTER INDEX index_name REBUILD [PARALLEL <dop>];
        ALTER INDEX index_name REBUILD PARTITION partition_name [PARALLEL <dop>];
        ALTER INDEX index_name REBUILD SUBPARTITION subpartition_name [PARALLEL <dop>];

    Convert a table/partition/subpartition from EHCC to OLTP compression or uncompressed format:

        ALTER TABLE table_name MOVE [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];
        ALTER TABLE table_name MOVE PARTITION partition_name [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];
        ALTER TABLE table_name MOVE SUBPARTITION subpartition_name [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];

        SELECT owner, table_name, compress_for FROM DBA_TABLES WHERE compression = 'DISABLED';
        SELECT table_owner, table_name, partition_name, compress_for FROM DBA_TAB_PARTITIONS WHERE compression = 'DISABLED';
        SELECT table_owner, table_name, subpartition_name, compress_for FROM DBA_TAB_SUBPARTITIONS WHERE compression = 'DISABLED';

    Then rebuild any unusable indexes with the same queries and ALTER INDEX ... REBUILD statements as above.


  • DB Schema for ACL involving 3 subdomains

    - by blacktie24
    Hi, I am trying to design a database schema for a web app which has 3 subdomains: a) internal employees, b) clients, c) contractors. The users will be able to communicate with each other to some degree, and there may be some resources that overlap between them. Any thoughts about this schema? I really appreciate your time and thoughts on this. Cheers!

        -- Table structure for table locations
        CREATE TABLE IF NOT EXISTS locations (
          id bigint(20) NOT NULL,
          name varchar(250) NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table privileges
        CREATE TABLE IF NOT EXISTS privileges (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(255) NOT NULL,
          resource_id int(11) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10;

        -- Table structure for table resources
        CREATE TABLE IF NOT EXISTS resources (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(255) NOT NULL,
          user_type enum('internal','client','expert') NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table roles
        CREATE TABLE IF NOT EXISTS roles (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(255) NOT NULL,
          type enum('position','department') NOT NULL,
          parent_id int(11) DEFAULT NULL,
          user_type enum('internal','client','expert') NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table role_perms
        CREATE TABLE IF NOT EXISTS role_perms (
          id int(11) NOT NULL AUTO_INCREMENT,
          role_id int(11) NOT NULL,
          privilege_id int(11) NOT NULL,
          mode varchar(250) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table users
        CREATE TABLE IF NOT EXISTS users (
          id int(10) unsigned NOT NULL AUTO_INCREMENT,
          email varchar(255) NOT NULL,
          password varchar(255) NOT NULL,
          salt varchar(255) NOT NULL,
          type enum('internal','client','expert') NOT NULL,
          first_name varchar(255) NOT NULL,
          last_name varchar(255) NOT NULL,
          location_id int(11) NOT NULL,
          phone varchar(255) NOT NULL,
          status enum('active','inactive') NOT NULL DEFAULT 'active',
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Table structure for table user_perms
        CREATE TABLE IF NOT EXISTS user_perms (
          id int(11) NOT NULL AUTO_INCREMENT,
          user_id int(11) NOT NULL,
          privilege_id int(11) NOT NULL,
          mode varchar(250) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table user_roles
        CREATE TABLE IF NOT EXISTS user_roles (
          id int(11) NOT NULL,
          user_id int(11) NOT NULL,
          role_id int(11) NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;


  • Iterating selected rows in an ADF Faces table

    - by Frank Nimphius
    In OTN Harvest May 2012 (http://www.oracle.com/technetwork/developer-tools/adf/learnmore/may2012-otn-harvest-1652358.pdf) I wrote about a "Common mistake when iterating <af:table> rows". In that entry I showed code to access the row associated with a selected table row from the binding layer, to avoid the problem of having to programmatically change the selected table row. As it turns out, my solution only worked for selected table rows that are in the current iterator query range. So here's a solution that works for all ranges:

        public String onButtonPress() {
          RowKeySet rks = table.getSelectedRowKeys();
          Iterator it = rks.iterator();
          while (it.hasNext()) {
            List selectedRowKeyPath = (List) it.next();
            // "table" is the JSF component reference created using the table's
            // binding property
            Row row = ((JUCtrlHierNodeBinding) table.getRowData(selectedRowKeyPath)).getRow();
            System.out.println("Print Test: " + row.getAttribute(1));
          }
          return null;
        }


  • Free file/image hosting website with api [closed]

    - by KoolKabin
    Possible duplicate: Which image sharing websites support file uploading dynamically via an API?

    I would like to know whether there is any free image/file hosting website that allows users to upload images via an API. I tried imageshack.us; the only problem with it is that I could not upload the files under my account in ImageShack. URL: http://www.outsourcingnepal.com/ImageShack/Uploader/


  • HTML table to CSV table with image

    - by Joseph
    How do I export this HTML table to CSV? Example table below; I want this table to be exported to CSV, so how do I achieve that using jQuery?

        <html>
        <body bgcolor="cyan">
        <table border="1" align="center">
          <br><a href="imp2.csv">Click Here To View In CSV format</a><img src="up.jpg" align="middle" width="39" height="32" />
          <tr>
            <th>ID</th>
            <th>Name</th>
            <th>Month</th>
            <th>Savings</th>
          </tr>
        </table>
        </body>
        </html>

    Thanks, Joseph
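
    A minimal jQuery sketch of the export step - it serializes every row's cells into quoted CSV text; what to do with the resulting string (download link, textarea, POST to the server) is left open:

        // Turn each <tr> into one CSV line, quoting cell text.
        function tableToCsv($table) {
            return $table.find('tr').map(function () {
                return $(this).find('th, td').map(function () {
                    return '"' + $(this).text().replace(/"/g, '""') + '"';
                }).get().join(',');
            }).get().join('\n');
        }

        var csv = tableToCsv($('table')); // "ID","Name","Month","Savings" ...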


  • Delete temp file during finally vs delete output file during catch

    - by Russell
    This is in Java 6. I've seen more than once that people create a temp file, do something with it, then rename it to the output file. Everything is wrapped in a try-finally block, where the temp file is deleted in finally in case something goes wrong in between.

        try {
          // do something with tempFile
          // do something with tempFile
          // do something with tempFile
          tempFile.renameTo(outputFile);
        } finally {
          if (tempFile.exists())
            tempFile.delete();
        }

    I was wondering what the benefits of doing that are, instead of working on the output file directly and deleting it in case of exceptions.

        try {
          // do something with outputFile
          // do something with outputFile
          // do something with outputFile
        } catch (Exception e) {
          if (outputFile.exists())
            outputFile.delete();
        }

    My guess is that deleting temp files in finally benefits me when the try block can throw many kinds of exceptions. Is my guess right? What else?


  • SQL join to grab data from same table via intermediate table

    - by Sergio
    Hi, could someone help me with building the following query? I have a table called Sites and one called Site_H. The two are joined by a foreign key relationship on page_id. The Sites table contains pages, and the Site_H table shows which pages any given page is a child of, by having another foreign key relation back to the Sites table in a column called ParentOf. So a page can have another page as its parent. Other data, such as position, is stored in the Site_H table, which is why it is separated out. I would like a query that returns the details of a page along with the details of its parent page. I just can't quite figure out how to structure the SQL. Thanks
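
    Based on that description, a self-join through the intermediate table might look like this sketch (Sites, Site_H, page_id, and ParentOf are from the question; the aliasing is assumed):

        SELECT child.*, parent.*
        FROM Sites AS child
        JOIN Site_H AS h ON h.page_id = child.page_id
        JOIN Sites AS parent ON parent.page_id = h.ParentOf;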


  • Keep local MS SQL 2008 DB table and remote SQL Azure DB table in sync

    - by Boomerangertanger
    Hi there, I have a dedicated server which hosts a Windows Service that does a lot of very heavy-load work and populates a number of SQL Server database tables. However, of all the database tables it populates and works with, I want only one to be synchronised with a remote SQL Azure DB table. This is because this table holds what I call Resolved data, which is the end result of the Windows Service's work. I would like to keep a SQL Azure database table in sync with this database table. As far as I understand, my options are:

        1. Move everything onto Azure (but that involves a massive development overhead and risk).
        2. Have another Windows Service on the dedicated server which essentially looks at records changed since the last update and then manually updates the SQL Azure table.

