Search Results

Search found 10719 results on 429 pages for 'temp tables'.

Page 151/429

  • edit files in unix

    - by Niklas
    Hi, I started working in Unix a couple of weeks ago and know very little about it. Every time I edit a file, a temp file ending in ~ appears in the filesystem. What does this mean? Can I just remove all these files? And how can I have them removed automatically every time I close the original file? /Niklas
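
    The ~ files are editor backup copies (both Emacs and Vim create them by default), so they are safe to delete. A minimal sketch of the usual clean-up, assuming GNU find and a Vim or Emacs setup:

        # delete all backup files under the current directory
        find . -name '*~' -type f -delete

        # or stop the editor creating them in the first place:
        #   Vim:   set nobackup nowritebackup    (in ~/.vimrc)
        #   Emacs: (setq make-backup-files nil)  (in ~/.emacs)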

    Read the article

  • NHibernate ICriteria with a bag

    - by plunk
    Hi, just a quick question. If I've got two tables joined through a third table in a many-to-many relationship, is it possible to write an ICriteria with expressions against one of the tables and the join table? Let's say the mapping file looks something like:

        <bag name="Bag" table="JoinTable" cascade="none">
          <key column="Data_ID"/>
          <many-to-many class="Data2" column="Data2_ID"/>
        </bag>

    Is it then possible to write an ICriteria like the following?

        ICriteria crit = session.CreateCriteria(typeof(Data));
        crit.Add(Expression.Eq("Name", name));
        crit.Add(Expression.Between("Date", startDate, endDate));
        crit.Add(Expression.Eq("Bag", data2IDNumber));

    When I try this, it tells me the expected type is IList, whereas the actual type is Bag. Thanks.
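
    Expression.Eq compares a property value, and Bag is a collection, hence the type mismatch. A hedged sketch of the usual alternative, creating a sub-criteria on the association (assuming Data2's identifier property is mapped as Id):

        ICriteria crit = session.CreateCriteria(typeof(Data));
        crit.Add(Expression.Eq("Name", name));
        crit.Add(Expression.Between("Date", startDate, endDate));
        // navigate into the many-to-many collection and filter on the
        // associated entity's identifier (the property name is an assumption)
        crit.CreateCriteria("Bag").Add(Expression.Eq("Id", data2IDNumber));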

    Read the article

  • ORA-06502: PL/SQL: numeric or value error: character string buffer too small with Oracle aggregate functions

    - by Tunde
    Good day gurus, I have a script that populates tables on a regular basis. It crashed and gave the above error. The strange thing is that it had been running for close to 3 months on the production system with no problems and suddenly crashed last week. There have not been any changes to the tables as far as I know. Has anyone encountered something like this before? I believe it has something to do with the aggregate functions I'm implementing in it, but it worked initially. Please find below the part of the script, developed into a procedure, that I reckon gives the error.

        CREATE OR REPLACE PROCEDURE V1 IS
        --DECLARE
          v_a VARCHAR2(4000);
          v_b VARCHAR2(4000);
          v_c VARCHAR2(4000);
          v_d VARCHAR2(4000);
          v_e VARCHAR2(4000);
          v_f VARCHAR2(4000);
          v_g VARCHAR2(4000);
          v_h VARCHAR2(4000);
          v_i VARCHAR2(4000);
          v_j VARCHAR2(4000);
          v_k VARCHAR2(4000);
          v_l VARCHAR2(4000);
          v_m VARCHAR2(4000);
          v_n NUMBER(10);
          v_o VARCHAR2(4000);
        -- Procedure that populates DEMO table
        BEGIN
          -- Delete all from the DEMO table
          DELETE FROM DEMO;

          -- Populate fields in DEMO from DEMOV1
          INSERT INTO DEMO(ID, D_ID, CTR_ID, C_ID, DT_NAM, TP, BYR, ENY, ONG,
                           SUMM, DTW, REV, LD, MD, STAT, CRD)
          SELECT ID, D_ID, CTR_ID, C_ID, DT_NAM, TP,
                 TO_NUMBER(TO_CHAR(BYR,'YYYY')),
                 TO_NUMBER(TO_CHAR(NVL(ENY,SYSDATE),'YYYY')),
                 CASE WHEN ENY IS NULL THEN 'Y' ELSE 'N' END,
                 SUMMARY, DTW, REV, LD, MD, '1', SYSDATE
          FROM DEMOV1;

          -- Loop through DEMO table
          FOR j IN (SELECT ID, CTR_ID, C_ID FROM DEMO) LOOP
            SELECT semic_concat(TXTDESC) INTO v_a FROM GEOT WHERE ID = j.ID;

            SELECT COUNT(*) INTO v_n
            FROM MERP M, PROJ P
            WHERE M.MID = P.COD AND ID = j.ID AND PROAC IS NULL;

            IF (v_n > 0) THEN
              SELECT semic_concat(PRO) INTO v_b
              FROM MERP M, PROJ P
              WHERE M.MID = P.COD AND ID = j.ID;
            ELSE
              SELECT semic_concat(PRO || '(' || PROAC || ')') INTO v_b
              FROM MERP M, PROJ P
              WHERE M.MID = P.COD AND ID = j.ID;
            END IF;

            SELECT semic_concat(VOCNAME('P02',COD)) INTO v_c FROM PAR WHERE ID = j.ID;
            SELECT semic_concat(VOCNAME('L05',COD)) INTO v_d FROM INST WHERE ID = j.ID;
            SELECT semic_concat(NVL(AUTHOR,'Anon') || ' (' || to_char(PUB,'YYYY') || ') ' || TITLE || ', ' || EDT)
              INTO v_e FROM REFE WHERE ID = j.ID;
            SELECT semic_concat(NAM) INTO v_f
            FROM EDM E, EDO EO WHERE E.EDMID = EO.EDOID AND ID = j.ID;
            SELECT semic_concat(VOCNAME('L08', COD)) INTO v_g FROM AVA WHERE ID = j.ID;

            SELECT or_concat(NAM) INTO v_o FROM CON WHERE ID = j.ID AND NAM = 'Unknown';
            IF (v_o = 'Unknown') THEN
              SELECT or_concat(JOBTITLE || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID;
            ELSE
              SELECT or_concat(NAM || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
            IF (v_i = ',') THEN
              v_i := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
            IF (v_j = ',') THEN
              v_j := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
            IF (v_k = ',') THEN
              v_k := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
            IF (v_l = ',') THEN
              v_l := null;
            ELSE
              SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
            IF (v_m = ',') THEN
              v_m := null;
            ELSE
              SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
            END IF;

            -- Update DEMO table
            UPDATE DEMO
            SET GEOC = v_a, PRO = v_b, PAR = v_c, INS = v_d, REFER = v_e,
                ORGR = v_f, AVAY = v_g, CON = v_h, DTH = v_i, INST = v_j,
                SA = v_k, CC = v_l, EDPR = v_m,
                CTR  = (SELECT NAM FROM EDM WHERE EDMID = j.CTR_ID),
                COLL = (SELECT NAM FROM EDM WHERE EDMID = j.C_ID)
            WHERE ID = j.ID;
          END LOOP;
        END V1;
        /

    The aggregate functions are commaencap_concat (encapsulates with a comma), or_concat (concatenates with an 'or') and semic_concat (concatenates with a semi-colon). The remaining tables used are all linked to the main table DEMO. I have checked the column sizes and there seems to be no problem. I tried executing the SELECT statements alone and they give the same error without populating the tables. Any clues? Many thanks for your anticipated support.
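
    ORA-06502 here usually means one of the custom aggregates built a string longer than the VARCHAR2(4000) it returns into, which fits the sudden onset: the error appears as soon as the data grows past the limit. A hedged way to confirm, using only the names from the script above:

        -- run the suspect aggregate standalone and look at the lengths;
        -- anything approaching 4000 characters is the likely culprit
        SELECT ID, LENGTH(semic_concat(TXTDESC)) AS len
        FROM   GEOT
        GROUP  BY ID
        ORDER  BY len DESC;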

    Read the article

  • Scrolling with CSS

    - by Jordan Trulen
    I have 4 tables that need to scroll, set up as follows: Table1 (static), Table2 (horizontal scrolling), Table3 (vertical scrolling), Table4 (horizontal and vertical scrolling), laid out in a 2x2 grid:

        Table1  Table2
        Table3  Table4

    The tricky part of this is that Table3 and Table4 need to stay in sync, as they are one listing of data broken out into two tables. Table2 and Table4 are in the same situation. Any ideas? No JavaScript please; we have a script that works, but it is far too slow. Thanks.
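
    One CSS-only restructuring merges the four tables into a single scrollable table and freezes the header row and header column with position: sticky (a modern CSS feature, no JavaScript needed); the class names below are hypothetical:

        /* one scroll container holds everything; sizes are placeholders */
        .grid { overflow: auto; height: 400px; width: 600px; }

        /* Table2's role: header rows frozen at the top */
        .grid thead th { position: sticky; top: 0; background: #fff; }

        /* Table3's role: header column frozen at the left */
        .grid th.row-hdr { position: sticky; left: 0; background: #fff; }

        /* Table1's role: the corner cell, frozen both ways */
        .grid thead th.corner { left: 0; z-index: 2; }

    Because everything scrolls as one element, the panes cannot fall out of sync.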

    Read the article

  • SQL Server CLR Integration to achieve Encryption/Decryption

    - by Aakash
    I have a requirement to store data in encrypted form in database tables. I want to do it at the database level, but the problems I am facing are:

        (a) The data type of the field should be varbinary.
        (b) Encryption is not supported by the Workgroup edition.
        (c) Is it possible to encrypt numeric fields?

    I want to access the encrypted data in tables from views and stored procedures for some processing, but due to the above problems I am not able to. Here is my environment: development platform ASP.NET, .NET Framework 3.5, Visual Studio 2008; server OS Windows Server 2008; database SQL Server 2008 Workgroup edition. I was also thinking of a different approach to resolve this issue (yet to test its feasibility). I was wondering if I could create a CLR function (which could take parameters and encrypt/decrypt the data using the cryptography types provided in the .NET Framework), use the CLR integration feature of SQL Server, and call that function from stored procedures and views. I am not sure if I am thinking in the right direction? Any advice on this as well, please.
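
    CLR integration does work for this: built-in cell-level encryption and TDE are tied to higher editions, but a SQLCLR scalar function runs on any edition. A minimal sketch (not production code), assuming AES with the key and IV passed in as parameters; real key management would live elsewhere, numeric fields would be converted to a string or byte form first, and the result is stored as varbinary either way:

        using System.Data.SqlTypes;
        using System.IO;
        using System.Security.Cryptography;
        using Microsoft.SqlServer.Server;

        public static class CryptoFunctions
        {
            [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
            public static SqlBytes EncryptString(SqlString plainText, SqlBytes key, SqlBytes iv)
            {
                if (plainText.IsNull) return SqlBytes.Null;
                using (var aes = new RijndaelManaged { Key = key.Value, IV = iv.Value })
                using (var enc = aes.CreateEncryptor())
                using (var ms = new MemoryStream())
                using (var cs = new CryptoStream(ms, enc, CryptoStreamMode.Write))
                {
                    // encrypt the UTF-16 bytes of the input string
                    byte[] data = System.Text.Encoding.Unicode.GetBytes(plainText.Value);
                    cs.Write(data, 0, data.Length);
                    cs.FlushFinalBlock();
                    return new SqlBytes(ms.ToArray());
                }
            }
        }

    The assembly is then registered with CREATE ASSEMBLY and the method exposed with CREATE FUNCTION, after which views and stored procedures can call it like any scalar function.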

    Read the article

  • MSSQL: efficiently dropping a group of rows from a table with millions and millions of rows

    - by Net Citizen
    I recently asked this question: http://stackoverflow.com/questions/2519183/ms-sql-share-identity-seed-amongst-tables (many people wondered why). I have the following layout of a table:

        Table: Stars
          starId     bigint
          categoryId bigint
          starname   varchar(200)

    But my problem is that I have millions and millions of rows, so when I want to delete stars from the table Stars it is too intense for MS SQL. I cannot use the built-in partitioning for 2005+ because I do not have an Enterprise license. When I do delete, though, I always delete a whole categoryId at a time. I thought of doing a design like this:

        Table: Star_1
          starId     bigint
          categoryId bigint constraint rock=1
          starname   varchar(200)

        Table: Star_2
          starId     bigint
          categoryId bigint constraint rock=2
          starname   varchar(200)

    In this way I can delete a whole category, and hence millions of rows, in O(1) by doing a simple DROP TABLE. My question is: is it a problem to have thousands of tables in MS SQL? The drop in O(1) is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?
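
    One alternative worth noting: local partitioned views do not need an Enterprise license, and with a CHECK constraint on categoryId you keep one logical Stars view while each category stays a droppable or truncatable table. A hedged sketch:

        -- one physical table per category; the CHECK constraint tells the
        -- optimizer (and the view) which rows live where
        CREATE TABLE Star_1 (
            starId     bigint NOT NULL,
            categoryId bigint NOT NULL CHECK (categoryId = 1),
            starname   varchar(200),
            CONSTRAINT PK_Star_1 PRIMARY KEY (starId, categoryId)
        );
        -- ... Star_2 with CHECK (categoryId = 2), and so on

        CREATE VIEW Stars AS
            SELECT starId, categoryId, starname FROM Star_1
            UNION ALL
            SELECT starId, categoryId, starname FROM Star_2;

        -- removing a category is then a metadata-only operation:
        TRUNCATE TABLE Star_1;   -- or DROP TABLE Star_1 and re-create the view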

    Read the article

  • Join column with different collation issue

    - by George2
    Hello everyone, I am using SQL Server 2005. I have two tables, and they use different collations. It is not allowed to concatenate columns from tables with different collations; for example, the following SQL is not allowed:

        select table1column1 + table2column2 from ...

    My question is: why is concatenating two columns from different collations not allowed, from a database engine design perspective? I do not understand why collation would impact the result; the result is just concatenated strings, which should be simple enough and not dependent on collation. Thanks in advance, George
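
    Collation determines not just comparison and sorting rules but also the code page of non-Unicode data, so the engine refuses to silently pick a collation for the result. The usual fix is to state one explicitly with COLLATE; a short sketch:

        -- force both operands to a common collation for the expression
        SELECT table1column1 + (table2column2 COLLATE DATABASE_DEFAULT)
        FROM ...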

    Read the article

  • 2 large databases - worth merging into 1?

    - by Ardman
    I have 2 large databases that were sharded before. I have now removed the sharding and created a new database with all of the data except for the tables that were originally sharded. Is it worth importing this data into the new database, or keeping them as separate entities that I can just scan through? We are talking about around 60 million records in each sharded table, of which there are 2 tables. Also, while I have an empty table, should I be adding the indexes which weren't thought of when the database was originally constructed, and which the tables are now too large to add?

    Read the article

  • SQL Query - Count column values separately

    - by user575535
    I have a hard time getting a query to work right. This is the DDL for my tables:

        CREATE TABLE Agency (
            id   SERIAL not null,
            city VARCHAR(200) not null,
            PRIMARY KEY(id)
        );

        CREATE TABLE Customer (
            id       SERIAL not null,
            fullname VARCHAR(200) not null,
            status   VARCHAR(15) not null CHECK(status IN ('new','regular','gold')),
            agencyID INTEGER not null REFERENCES Agency(id),
            PRIMARY KEY(id)
        );

    Sample data from the tables:

        AGENCY
        id|'city'
        1 |'London'
        2 |'Moscow'
        3 |'Beijing'

        CUSTOMER
        id|'fullname'      |'status' |agencyid
        1 |'Michael Smith' |'new'    |1
        2 |'John Doe'      |'regular'|1
        3 |'Vlad Atanasov' |'new'    |2
        4 |'Vasili Karasev'|'regular'|2
        5 |'Elena Miskova' |'gold'   |2
        6 |'Kim Yin Lu'    |'new'    |3
        7 |'Hu Jintao'     |'regular'|3
        8 |'Wen Jiabao'    |'regular'|3

    I want to produce the following output, but I need to count separately for 'new', 'regular' and 'gold':

        'city'   |new_customers|regular_customers|gold_customers
        'Moscow' |1            |1                |1
        'Beijing'|1            |2                |0
        'London' |1            |1                |0
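
    A conditional aggregate produces exactly this shape; a sketch of the usual pattern, one SUM(CASE ...) per status value:

        SELECT a.city,
               SUM(CASE WHEN c.status = 'new'     THEN 1 ELSE 0 END) AS new_customers,
               SUM(CASE WHEN c.status = 'regular' THEN 1 ELSE 0 END) AS regular_customers,
               SUM(CASE WHEN c.status = 'gold'    THEN 1 ELSE 0 END) AS gold_customers
        FROM Agency a
        JOIN Customer c ON c.agencyID = a.id
        GROUP BY a.city;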

    Read the article

  • Provide "Paste Link" Functionality in C# Winforms App

    - by Tim
    I would like to add Copy/Paste Link functionality to an application. The application replaces a complex Excel workbook. I would like to be able to copy tables, text, and charts from the application and use Paste Link in MS Word. For the uninitiated: with Excel, when you use Paste Link for tables, text, charts, etc., the items update in Word when you change them in Excel. Does anyone know for sure whether this is or is not possible (is it some proprietary feature of MS Word/Excel)? If not, can anyone point me to some resources that will help (either an app that does this or a tutorial/write-up)? Thanks!

    Read the article

  • Join query in doctrine symfony

    - by THOmas
    I have two tables, user details and blog question. The schema is:

        UserDetails:
          connection: doctrine
          tableName: user_details
          columns:
            id:
              type: integer(8)
              fixed: false
            name:
              type: string(255)
              fixed: false

        BlogQuestion:
          connection: doctrine
          tableName: blog_question
          columns:
            question_id:
              type: integer(8)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            blog_id:
              type: integer(8)
              fixed: false
            user_id:
              type: integer(8)
              fixed: false
            question_title:
              type: string(255)

    I am using one join query for retrieving all the questions and user details from these two tables. My join query is:

        $q = Doctrine_Query::create()
            ->select('*')
            ->from('BlogQuestion u')
            ->leftJoin('u.UserDetails p');
        $q->execute();

    But it is showing this error: Unknown relation alias UserDetails. Please, can anybody help me? Thanks in advance
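
    The schema never declares a relation between the two models, so Doctrine has no alias UserDetails to join on. A hedged sketch of the missing piece for a Doctrine 1.x YAML schema, using the column names from the schema above:

        # added to the BlogQuestion definition
        BlogQuestion:
          relations:
            UserDetails:
              local: user_id
              foreign: id

    After rebuilding the models, the leftJoin('u.UserDetails p') call should resolve.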

    Read the article

  • Database design: circular references

    - by SlappyTheFish
    I have three database tables: users, emails and invitations. Emails are linked to users by a user_id field. Invitations are also linked to users by a user_id field. Emails can be created without an invitation, but every invitation must have an email. I would like to link the emails and invitations tables so it is possible to find the email for a particular invitation. However, this creates a circular reference: both an invitation and an email record hold the id of the same user. Is this bad design, and if so, how could I improve it? My feeling is that with use of foreign keys and good business logic, it is fine.
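
    One common way to remove the duplication is to let invitations reference the email row directly and reach the user through it; a hedged sketch, with constraint names invented for illustration:

        -- invitations point at emails; the user is reachable via emails.user_id
        ALTER TABLE invitations ADD COLUMN email_id INT NOT NULL;
        ALTER TABLE invitations
          ADD CONSTRAINT fk_invitations_email
          FOREIGN KEY (email_id) REFERENCES emails (id);
        -- optionally drop invitations.user_id once the data is migrated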

    Read the article

  • Consolidate data from many different databases into one with minimum latency

    - by NTDLS
    I have 12 databases totaling roughly 1.0 TB, each on a different physical server running SQL 2005 Enterprise, all with the same exact schema. I need to offload this data into a separate single database so that we can use it for other purposes (reporting, web services, etc.) with a maximum of 1 hour latency. It should also be noted that these servers are all in the same rack, connected by gigabit connections, and that the inserts to the databases are minimal (avg. 2,500 records/hour). The current method is very flaky: the data is currently being replicated (SQL Server transactional replication) from each of the 12 servers to a database on another server (yes, 12 different employee tables from 12 different servers into a single employee table on a different server). Every table has a primary key and the rows are unique across all tables (there is a FacilityID in each table). What are my options? There has to be a simple way to do this.
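
    Given gigabit links, low insert volume and a 1-hour latency budget, a plain scheduled pull over linked servers is often enough and far easier to debug than twelve replication streams. A rough sketch of one incremental pull, assuming a rowversion column can be added to the source tables; the server, table and column names here are all hypothetical:

        -- run every N minutes per source server (e.g. from a SQL Agent job)
        DECLARE @last binary(8);
        SELECT @last = LastVersion FROM dbo.PullState WHERE SourceServer = 'FAC01';

        INSERT INTO dbo.Employee_Consolidated (FacilityID, EmployeeID, Name)
        SELECT FacilityID, EmployeeID, Name
        FROM [FAC01].SourceDb.dbo.Employee
        WHERE RowVer > @last;

        UPDATE dbo.PullState
        SET LastVersion = (SELECT MAX(RowVer) FROM [FAC01].SourceDb.dbo.Employee)
        WHERE SourceServer = 'FAC01';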

    Read the article

  • Views performance in MySQL for denormalization

    - by Gianluca Bargelli
    I am currently writing my first PHP application and I would like to know how to design and implement MySQL views properly. In my particular case, user data is spread across several tables (as a consequence of database normalization) and I was thinking of using a view to group the data into one large table:

        CREATE VIEW `Users_Merged` (name, surname, email, phone, role) AS
        (SELECT name, surname, email, phone, 'Customer' FROM `Customer`)
        UNION
        (SELECT name, surname, email, tel, 'Admin' FROM `Administrator`)
        UNION
        (SELECT name, surname, email, tel, 'Manager' FROM `manager`);

    This way I can use the view's data from the PHP app easily, but I don't really know how much this can affect performance. For example:

        SELECT * FROM `Users_Merged` WHERE role = 'Admin';

    Is this the right way to filter the view's data, or should I filter BEFORE creating the view itself? (I need this to have a list of users and the functionality to filter them by role.)

    EDIT: Specifically what I'm trying to obtain is denormalization of three tables into one. Is my solution correct? See Denormalization on Wikipedia.
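
    One performance note worth checking: MySQL cannot use the MERGE algorithm for a view containing UNION, so Users_Merged is materialized into a temporary table before the WHERE role = 'Admin' filter is applied. When the role is known up front, querying the underlying table directly avoids that materialization; a small sketch:

        -- equivalent to filtering the view on role = 'Admin',
        -- but lets MySQL use Administrator's own indexes
        SELECT name, surname, email, tel, 'Admin' AS role
        FROM `Administrator`;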

    Read the article

  • Can a PL/pgSQL function contain a dynamic subquery?

    - by morpheous
    I am writing a PL/pgSQL function. The function has input parameters which specify (indirectly) which tables to read filtering information from. The function embeds business logic which allows it to select data from different tables based on the input arguments. The function dynamically builds a subquery which returns filtering data, which is then used to run the main query. My questions are: Is it 'legal' to use a dynamic subquery in a PL/pgSQL function? I can't see why not, but this question is related to the next one. AFAIK, PL/pgSQL statements are cached or precompiled by the query engine. How does having a function that generates dynamic subqueries impact the work of the query engine?
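
    For reference, dynamic SQL in PL/pgSQL goes through EXECUTE, which is planned on each execution rather than cached like the function's static statements; that per-call planning is the main engine-side cost. A minimal sketch with hypothetical names:

        CREATE OR REPLACE FUNCTION filtered_count(tbl text, min_id integer)
        RETURNS bigint AS $$
        DECLARE
            result bigint;
        BEGIN
            -- quote_ident() guards the dynamic table name; the value is
            -- passed separately via USING, so it is never interpolated
            EXECUTE 'SELECT count(*) FROM ' || quote_ident(tbl)
                 || ' WHERE id > $1'
            INTO result
            USING min_id;
            RETURN result;
        END;
        $$ LANGUAGE plpgsql;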

    Read the article

  • MySQL trigger loop

    - by Lee
    Hey all, I am going through the painstaking process of sorting out someone else's code, so I decided to create a new database to sit alongside the old one and use triggers to transfer data between the corresponding tables. Now I have an issue with looping: there is a trigger on each table to update the other, so once one updates, it should update the other, but as both tables have triggers it will just loop, which will cause an issue. Is there a way to stop this from happening? Hope this makes sense and hope you can advise.
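
    A common guard is a session variable that marks a sync as in progress; each trigger sets it before writing to the mirror table and skips its body when it is already set, so the chain stops after one hop. A hedged sketch with hypothetical table and column names:

        DELIMITER //
        CREATE TRIGGER old_to_new AFTER UPDATE ON old_table
        FOR EACH ROW
        BEGIN
          IF @syncing IS NULL THEN
            SET @syncing = 1;
            UPDATE new_table SET name = NEW.name WHERE id = NEW.id;
            SET @syncing = NULL;
          END IF;
        END//
        DELIMITER ;
        -- the mirror trigger on new_table uses the same IF @syncing guard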

    Read the article

  • log4j.xml configuration with <rollingPolicy> and <triggeringPolicy>

    - by Mike Smith
    I am trying to configure log4j.xml in such a way that the file is rolled over based on file size, and the rolled file's name is e.g. "C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log". I followed this discussion: http://web.archiveorange.com/archive/v/NUYyjJipzkDOS3reRiMz. It finally worked for me only when I added:

        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

    to the method:

        public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                                         String filename, long fileLength)

    which makes it work. The question is whether there is a better way, since this method is called many times and the sleep slows my program down. Here is the code:

        package com.mypack.rolling;

        import org.apache.log4j.rolling.RollingPolicy;
        import org.apache.log4j.rolling.RolloverDescription;
        import org.apache.log4j.rolling.TimeBasedRollingPolicy;

        /**
         * Same as org.apache.log4j.rolling.TimeBasedRollingPolicy but acts only as
         * RollingPolicy and NOT as TriggeringPolicy.
         *
         * This allows us to combine this class with a size-based triggering policy
         * (decision to roll based on size, name of rolled files based on time)
         */
        public class CustomTimeBasedRollingPolicy implements RollingPolicy {

            TimeBasedRollingPolicy timeBasedRollingPolicy = new TimeBasedRollingPolicy();

            /**
             * Set file name pattern.
             * @param fnp file name pattern.
             */
            public void setFileNamePattern(String fnp) {
                timeBasedRollingPolicy.setFileNamePattern(fnp);
            }

            /*
            public void setActiveFileName(String fnp) {
                timeBasedRollingPolicy.setActiveFileName(fnp);
            }
            */

            /**
             * Get file name pattern.
             * @return file name pattern.
             */
            public String getFileNamePattern() {
                return timeBasedRollingPolicy.getFileNamePattern();
            }

            public RolloverDescription initialize(String file, boolean append) throws SecurityException {
                return timeBasedRollingPolicy.initialize(file, append);
            }

            public RolloverDescription rollover(String activeFile) throws SecurityException {
                return timeBasedRollingPolicy.rollover(activeFile);
            }

            public void activateOptions() {
                timeBasedRollingPolicy.activateOptions();
            }
        }

        package com.mypack.rolling;

        import org.apache.log4j.Appender;
        import org.apache.log4j.helpers.OptionConverter;
        import org.apache.log4j.rolling.TriggeringPolicy;
        import org.apache.log4j.spi.LoggingEvent;
        import org.apache.log4j.spi.OptionHandler;

        /**
         * Copy of org.apache.log4j.rolling.SizeBasedTriggeringPolicy but able to accept
         * a human-friendly value for maximumFileSize, eg. "10MB"
         *
         * Note that sub-classing SizeBasedTriggeringPolicy is not possible because that
         * class is final
         */
        public class CustomSizeBasedTriggeringPolicy implements TriggeringPolicy, OptionHandler {

            /**
             * Rollover threshold size in bytes.
             */
            private long maximumFileSize = 10 * 1024 * 1024; // let 10 MB be the default max size

            /**
             * Set the maximum size that the output file is allowed to reach before
             * being rolled over to backup files.
             *
             * <p>
             * In configuration files, the <b>MaxFileSize</b> option takes a long
             * integer in the range 0 - 2^63. You can specify the value with the
             * suffixes "KB", "MB" or "GB" so that the integer is interpreted as being
             * expressed respectively in kilobytes, megabytes or gigabytes. For example,
             * the value "10KB" will be interpreted as 10240.
             *
             * @param value the maximum size that the output file is allowed to reach
             */
            public void setMaxFileSize(String value) {
                maximumFileSize = OptionConverter.toFileSize(value, maximumFileSize + 1);
            }

            public long getMaximumFileSize() {
                return maximumFileSize;
            }

            public void setMaximumFileSize(long maximumFileSize) {
                this.maximumFileSize = maximumFileSize;
            }

            public void activateOptions() {
            }

            public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                                             String filename, long fileLength) {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                boolean result = (fileLength >= maximumFileSize);
                return result;
            }
        }

    and the log4j.xml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">

          <appender name="console" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out" />
            <layout class="org.apache.log4j.PatternLayout">
              <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" />
            </layout>
          </appender>

          <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
            <param name="file" value="C:/temp/test/test_log4j.log" />
            <rollingPolicy class="com.mypack.rolling.CustomTimeBasedRollingPolicy">
              <param name="fileNamePattern" value="C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" />
            </rollingPolicy>
            <triggeringPolicy class="com.mypack.rolling.CustomSizeBasedTriggeringPolicy">
              <param name="MaxFileSize" value="200KB" />
            </triggeringPolicy>
            <layout class="org.apache.log4j.PatternLayout">
              <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" />
            </layout>
          </appender>

          <logger name="com.mypack.myrun" additivity="false">
            <level value="debug" />
            <appender-ref ref="FILE" />
          </logger>

          <root>
            <priority value="debug" />
            <appender-ref ref="console" />
          </root>

        </log4j:configuration>
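
    The sleep only works because it spaces rollovers at least a millisecond apart; the underlying collision is that the time-based file name has one-second resolution, so two rollovers within the same second target the same file. A hedged alternative that throttles the trigger itself instead of sleeping on every event (the field and logic are an assumption, not from the original thread):

        private long lastRollover = 0;

        public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                                         String filename, long fileLength) {
            // roll only when the size threshold is hit AND a full second has
            // passed, so the %d{...HH_mm_ss} pattern yields a fresh file name
            if (fileLength >= maximumFileSize
                    && System.currentTimeMillis() - lastRollover >= 1000) {
                lastRollover = System.currentTimeMillis();
                return true;
            }
            return false;
        }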

    Read the article

  • Dynamically Added CheckBox Column is Disabled in GridView

    - by Mark Maslar
    I'm dynamically adding a Boolean column to a DataSet. The DataSet's table is the DataSource for a GridView, which auto-generates the columns. Issue: the checkboxes for this dynamically generated column are all disabled. How can I enable them?

        ds.Tables["Transactions"].Columns.Add("Retry", typeof(System.Boolean));
        ds.Tables["Transactions"].Columns["Retry"].ReadOnly = false;

    In other words, how can I control how GridView generates the checkboxes for a Boolean field? (And why does setting ReadOnly to false have no effect?) Thanks!
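
    Auto-generated Boolean columns become CheckBoxFields, which render enabled only while a row is in edit mode; DataColumn.ReadOnly never reaches the rendered control. One hedged workaround is to enable the checkbox in RowDataBound (the column position of "Retry" is an assumption here):

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                // the auto-generated CheckBoxField renders a CheckBox in the cell;
                // here "Retry" is assumed to be the last column
                int retryCell = e.Row.Cells.Count - 1;
                foreach (Control c in e.Row.Cells[retryCell].Controls)
                {
                    CheckBox cb = c as CheckBox;
                    if (cb != null) cb.Enabled = true;
                }
            }
        }

    The cleaner long-term fix is a TemplateField with an explicit CheckBox instead of relying on auto-generation.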

    Read the article

  • PHP: How to use mysql fulltext search and handle fulltext search result

    - by garcon1986
    Hello, I have tried to use MySQL fulltext search on our intranet. I wanted to use it to search in multiple tables and get independent results per table on the results page. This is what I did for searching:

        $query = "SELECT *
                  FROM testtable t1, testtable2 t2, testtable3 t3
                  WHERE match(t1.firstName, t1.lastName, t1.details) against('".$value."')
                     or match(t2.others, t2.information, t2.details) against('".$value."')
                     or match(t3.other, t3.info, t3.details) against('".$value."')";
        $result = mysql_query($query) or die('query error'.mysql_error());
        while ($row = mysql_fetch_assoc($result)) {
            echo $row['firstName'];
            echo $row['lastName'];
            echo $row['details'].'<br />';
        }

    Do you have any ideas about optimizing the query and formatting the output of the search results?
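
    The comma join makes every match return the cross product of the other two tables. Since the results should be shown per table anyway, one query per table, or a UNION with a source label, is the usual shape; a hedged sketch (the source labels are invented), which also escapes the user input, unlike the original concatenation:

        $value = mysql_real_escape_string($value);
        $query = "
            (SELECT 'people' AS source, firstName AS a, lastName AS b, details
             FROM testtable
             WHERE MATCH(firstName, lastName, details) AGAINST('$value'))
            UNION ALL
            (SELECT 'info', others, information, details
             FROM testtable2
             WHERE MATCH(others, information, details) AGAINST('$value'))
            UNION ALL
            (SELECT 'other', other, info, details
             FROM testtable3
             WHERE MATCH(other, info, details) AGAINST('$value'))";

    The source column then lets the PHP side group the rows into separate result sections.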

    Read the article

  • Creating a dataframe in pandas by multiplying two series together

    - by Aoife
    Say I have two series in pandas, series A and series B. How do I create a dataframe in which all of those values are multiplied together, i.e. with series A down the left hand side and series B along the top? Basically the same concept as a multiplication table, where series A would be the headings on the left, series B the headings along the top, and all the values in between would be filled in by multiplication: http://www.google.co.uk/imgres?imgurl=http://www.vaughns-1-pagers.com/computer/multiplication-tables/times-table-12x12.gif&imgrefurl=http://www.vaughns-1-pagers.com/computer/multiplication-tables.htm&h=533&w=720&sz=58&tbnid=9B8R_kpUloA4NM:&tbnh=90&tbnw=122&zoom=1&usg=__meqZT9kIAMJ5b8BenRzF0l-CUqY=&docid=j9BT8tUCNtg--M&sa=X&ei=bkBpUpOWOI2p0AWYnIHwBQ&ved=0CE0Q9QEwBg Thanks!
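
    numpy's outer product gives exactly this grid, and wrapping it in a DataFrame attaches the two indexes; a small sketch:

        import numpy as np
        import pandas as pd

        a = pd.Series([1, 2, 3], index=['a1', 'a2', 'a3'])
        b = pd.Series([10, 20], index=['b1', 'b2'])

        # rows labelled by A's index, columns by B's index
        table = pd.DataFrame(np.outer(a, b), index=a.index, columns=b.index)
        print(table)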

    Read the article

  • Linux: mplayer instead of Flash (any distro/browser)

    - by data_jepp
    The thing is that Flash sucks on 64-bit Linux, performance-wise. Because of this I actually download every clip to a temp folder and play the file with mplayer while it is still downloading. This works really nicely; Flash is not needed just to play a video. Does a plugin for this exist, for any browser? I tried Google, but I can't find the right words to search for.

    Read the article

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records, and a set of stored procedures that perform complex search operations on the database. The performance of the stored procedures is slow even with suitable indexes on the tables. The design of the database is a normal relational design. I want to change the database design to be multidimensional and use MDX queries instead of the traditional T-SQL queries, but the question is: is an MDX query better than a traditional T-SQL query with regard to performance? And if yes, to what extent will that improve the performance of the queries? Thanks for any help.
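
    The gain, when there is one, comes from pre-computed aggregations in the cube rather than from MDX as a language; a query answered from pre-aggregated cells avoids scanning the 600 million detail rows, while one that needs leaf-level detail may not improve at all. For a feel of the paradigm difference, a hedged example against a hypothetical star schema and cube:

        -- T-SQL: aggregates detail rows at query time
        SELECT d.CalendarYear, SUM(f.Amount)
        FROM FactSales f JOIN DimDate d ON d.DateKey = f.DateKey
        GROUP BY d.CalendarYear;

        -- MDX: reads (possibly pre-aggregated) cube cells
        SELECT [Measures].[Amount] ON COLUMNS,
               [Date].[Calendar Year].Members ON ROWS
        FROM [SalesCube];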

    Read the article

  • vb.net | Update DB with OleDB

    - by liron
    I wrote a module for a connection to a DB with OleDB, and the sub UpdateClients doesn't work; the DB doesn't update. What's missing or wrong?

        Module mdlDB
            Const CONNECTION_STRING As String = _
                "provider=Microsoft.Jet.OleDB.4.0;Data Source=DbHalf.mdb;mode=Share Deny None"

            Dim daClient As New OleDb.OleDbDataAdapter
            Dim dsClient As New DataSet
            Dim cmClient As CurrencyManager

            Public Sub OpenClients(ByVal txtId, ByVal txtName, ByVal BindingContext)
                Dim Con As New OleDb.OleDbConnection(CONNECTION_STRING)
                Dim sqlClient As New OleDb.OleDbCommand
                Con.Open()
                sqlClient.CommandText = "SELECT * "
                sqlClient.CommandText += "FROM tblClubClient"
                sqlClient.Connection = Con
                daClient.SelectCommand = sqlClient
                dsClient.Clear()
                daClient.Fill(dsClient, "CLUB_CLIENT")
                cmClient = BindingContext(dsClient, "CLUB_CLIENT")
                cmClient.Position = 0
                txtId.DataBindings.Add("text", dsClient, "CLUB_CLIENT.ClntId")
                txtName.DataBindings.Add("text", dsClient, "CLUB_CLIENT.ClntName")
                Con.Close()
            End Sub

            Public Sub UpdateClients(ByVal txtId, ByVal txtName, ByVal BindingContext)
                Dim cb As New OleDb.OleDbCommandBuilder(daClient)
                cmClient = BindingContext(dsClient, "CLUB_CLIENT")
                dsClient.Tables("CLUB_CLIENT").Rows(cmClient.Position).Item("ClntId") = txtId.Text
                dsClient.Tables("CLUB_CLIENT").Rows(cmClient.Position).Item("ClntName") = txtName.Text
                daClient.Update(dsClient, "CLUB_CLIENT")
            End Sub
        End Module
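
    Two things commonly bite here: an OleDbCommandBuilder can only generate the UPDATE when the SELECT includes the table's primary key, and data-bound controls hold pending edits until the CurrencyManager flushes them. A hedged adjustment to UpdateClients along those lines:

        Public Sub UpdateClients(ByVal txtId, ByVal txtName, ByVal BindingContext)
            Dim cb As New OleDb.OleDbCommandBuilder(daClient)
            cmClient = BindingContext(dsClient, "CLUB_CLIENT")
            ' push the pending values from the bound TextBoxes into the DataSet;
            ' the manual Rows(...).Item(...) assignments are then unnecessary
            cmClient.EndCurrentEdit()
            daClient.Update(dsClient, "CLUB_CLIENT")
        End Sub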

    Read the article

  • java: how to compress data into a String and uncompress data from the String

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data in this repository I can only use a method that takes the name of the resource and its content as a String (like "data.txt" + "hello world"). The repository mocks a filesystem but is not one, so I can not use File directly. I want to be able to do the following: the client sends a file 'data.txt' to the server; the server compresses 'data.txt' into data.zip; the server sends the content of data.zip to the repository; the repository stores data.zip; the client downloads data.zip from the repository and is able to open it with its favourite zip tool. I have tried a lot of compression examples found on the web, but each time I send the data to the repository my resulting zip file is corrupted. Here is a sample class, using Zip*Stream, that emulates the repository and showcases my problem. The created zip file works, but after its 'serialization' it gets corrupted. (The sample class uses Jakarta commons-io.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;

        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it.
                tempFile.delete();

                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file "
                            + zipFile.getAbsolutePath() + " to " + tempFile.getAbsolutePath());
                }

                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));

                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }
                // Close the streams
                zin.close();

                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";

                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");
                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt
                // that can be read using winrar

                // now simulate writing the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists and contains a compressed data.txt,
                // but it can not be read using winrar:
                // CRC failed in data.txt. The file is corrupt
            }
        }
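
    The corruption happens in the last step: readFileToString decodes the zip's raw bytes with a character encoding, and bytes that are not valid in that encoding are mangled on the way back out. Binary data has to stay bytes, or be wrapped in a binary-safe text encoding such as Base64 when the repository API really only takes a String. A hedged sketch, using commons-codec for the Base64 leg:

        // keep it binary: a byte-for-byte copy survives the round trip
        byte[] archiveBytes = FileUtils.readFileToByteArray(zipArchive);
        FileUtils.writeByteArrayToFile(new File(resultZipFile), archiveBytes);

        // if the repository only accepts String, Base64-encode the bytes
        // (org.apache.commons.codec.binary.Base64, from commons-codec)
        String safeText = new String(Base64.encodeBase64(archiveBytes), "US-ASCII");
        byte[] restored = Base64.decodeBase64(safeText.getBytes("US-ASCII"));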

    Read the article

  • Data model for an MVC learning project

    - by Dofs
    Hi, I am trying to learn Microsoft MVC 2, and have found a small project I wanted to build with it. My idea is to simulate a restaurant where you can reserve a table. Basics: a user can only reserve a full table, so I don't have the trouble of merging people on different tables, and a person can reserve a table for a certain number of hours. My question is how I could design the data model the smartest way. I thought of making my database like this:

        Table {
            Id,
            TableName
        }

        Reservations {
            Id,
            TableId,
            ReservedFrom,
            ReservedTo,
            UserId
        }

        User {
            UserId,
            UserName,
            ...
        }

    By doing it this way I would have to program a lot of the logic in e.g. the business layer to work out which tables are occupied at what time, instead of having the data model handle it. Therefore, do you guys have a better way to do this?
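
    The schema above can already answer the availability question in one query; the standard interval-overlap test keeps that logic in SQL rather than the business layer. A hedged sketch (the parameter names are hypothetical):

        -- tables that are free for the whole interval [@from, @to):
        -- two intervals overlap exactly when each starts before the other ends
        SELECT t.Id, t.TableName
        FROM [Table] t
        WHERE NOT EXISTS (
            SELECT 1
            FROM Reservations r
            WHERE r.TableId = t.Id
              AND r.ReservedFrom < @to
              AND r.ReservedTo   > @from
        );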

    Read the article
