Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 109/588 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • What to consider when using triggers on tables in SQL Server merge replication

    - by Ice
    Hi, I have been running a SQL Server 2000 merge replication across three locations for some years. Triggers do a lot of work in this database, and I have had no trouble. Now, migrating the database to a brand-new SQL Server 2008 instance, I am seeing issues with the triggers: they fire even while the merge agent does its work. Does anybody have experience with this kind of thing on SQL Server 2008? Can anybody confirm that the behaviour differs from SQL Server 2000? Peace, Ice

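    One standard way to keep user triggers out of the merge agent's way is to mark them NOT FOR REPLICATION, so they skip firing when a replication agent applies the change. A minimal sketch, assuming a hypothetical Orders table and audit column (the trigger body is illustrative only):

        -- Hedged sketch: table, key and column names are hypothetical.
        -- NOT FOR REPLICATION tells SQL Server not to fire this trigger
        -- when the change is applied by a replication agent.
        CREATE TRIGGER trg_Orders_Audit
        ON dbo.Orders
        AFTER UPDATE
        NOT FOR REPLICATION
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE o
            SET LastModified = GETDATE()
            FROM dbo.Orders o
            JOIN inserted i ON i.OrderID = o.OrderID;
        END

    Whether SQL Server 2008 honours this differently from SQL Server 2000 in a given topology is exactly the question here, so it is worth verifying on a test publication first.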

  • Silverlight 4.0: How to increase quota in Isolated File Storage

    - by xscape
    I have this code, but it's not working:

        using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
        {
            long newSpace = isf.Quota + spaceRequest;
            try
            {
                if (true == isf.IncreaseQuotaTo(newSpace))
                {
                    Debug.WriteLine("success");
                }
                else
                {
                    Debug.WriteLine("unsuccessful");
                }
            }
            catch (Exception e)
            {
                throw;
            }
        }

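    One likely cause worth checking: Silverlight only grants IncreaseQuotaTo when it is called from a user-initiated event such as a button click; invoked from startup code or a timer it simply returns false without showing the consent dialog. A minimal sketch under that assumption (the handler name is hypothetical; spaceRequest is the variable from the question):

        // Hedged sketch: QuotaButton_Click is a hypothetical Button.Click handler.
        // IncreaseQuotaTo must run inside a user-initiated event or it returns false.
        private void QuotaButton_Click(object sender, RoutedEventArgs e)
        {
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                long newSpace = isf.Quota + spaceRequest;
                if (isf.IncreaseQuotaTo(newSpace))
                {
                    Debug.WriteLine("success");
                }
                else
                {
                    // User declined, or the call was not user-initiated.
                    Debug.WriteLine("unsuccessful");
                }
            }
        }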

  • Separation of notification confirmation+storage from notification handling in ASP.NET MVC

    - by bastijn
    After a payment from my web application to a third party, the third party sends a notification message in addition to the direct confirmation message. This notification message is stored in my database for future use, and I have to send back a "notification confirmed" response. For this purpose I currently use:

        return Content("received")

    which is the standard protocol for the service. Currently I process the incoming notification by first storing it, then handling it (updating account credits etc. in my application), and finally sending the response. This all works well, but I want to separate handling the notification from storing it and responding to the web service. The problem is that return Content() ends my controller method, so I cannot simply send the confirmation back to the web service first and then call my handle_Notification() method. One solution would be to replace the return Content() part with something equivalent that doesn't involve a return - is this possible? Since I do not know the complete calling URL, I cannot easily create a simple HTTP POST request myself (I tried; I might have made an error, but it did not work). Another solution would be some kind of timer or listener that either periodically checks the database for new notifications that have to be handled, or listens for new notifications in the DB. What is the standard procedure for this, if any?

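    A common pattern for this is to store the notification, hand the slow processing to a background thread, and return the confirmation immediately; the response goes out while the handling proceeds. A minimal sketch, assuming hypothetical Notification and SaveNotification names around the question's own handle_Notification method:

        // Hedged sketch: Notification and SaveNotification stand in for the
        // question's own types; handle_Notification is the method it mentions.
        public ActionResult Notify(Notification notification)
        {
            SaveNotification(notification);   // store first, as today

            // Queue the slow part; the action returns without waiting for it.
            ThreadPool.QueueUserWorkItem(_ => handle_Notification(notification));

            return Content("received");       // confirmation goes out immediately
        }

    Worker threads can be lost when the application pool recycles, so the database-polling variant the question mentions (a scheduled job that processes stored notifications) is the more robust of the two.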

  • SQL Server 2008 - Editing Tables: Bit columns require 'True' or 'False'

    - by CJM
    Not so much a question as an observation... I'm just upgrading to SQL Server 2008 on my development machine in anticipation of upgrading my live applications. I didn't anticipate any problems, since [I think] I generally use standard T-SQL, probably not too far from ANSI-standard SQL. So far so good, but I was really thrown by a very simple change: I was creating a simple, small look-up table to store a list of codes, including a bit column to indicate the current default code. But when I used the new/modified 'Edit Top 200 Rows' option and entered my 0s and 1s in the bit column, I got an error: 'Invalid value for cell - String was not recognised as a valid boolean'. After a bit of head-scratching, I tried True and False - and they worked. So it seems this new Edit feature requires four or five characters to be typed, rather than the previous one. Checking further, we can still use '...where bitval = 1' but can now also use '...where bitval = 'true''. But any results returned still render these bit columns as 0 or 1. It all sounds like half a step backwards. Not the end of the world, but an unnecessary annoyance. Does anybody have any insight into this issue? Or are there any other new gotchas with SQL Server 2008?

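    For what it's worth, the T-SQL side is unchanged: a bit column still compares against 0/1, and SQL Server will also implicitly convert the strings 'true' and 'false' to bit. A quick demonstration against a hypothetical lookup table:

        -- Hedged sketch: dbo.Codes is a hypothetical lookup table with a bit column.
        SELECT Code FROM dbo.Codes WHERE IsDefault = 1;        -- classic form
        SELECT Code FROM dbo.Codes WHERE IsDefault = 'true';   -- 'true' converts to 1
        -- Either way, result sets still render the bit column as 0 or 1.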

  • PHP: optimum configuration storage?

    - by Jerome WAGNER
    Hello, my application is configured via a lot of key/value pairs (say 30,000, for instance). I want to find the best deployment method for this configuration, knowing that I want to avoid DEFINEs so as to allow runtime re-configuration. I have thought of:

        - pre-compiling them into an array via a PHP file
        - pre-compiling them into a tmpfs SQLite database
        - pre-compiling them into memcached

    What are my options for the best random-access time to this configuration (memory is not an issue)? Thanks, Jerome

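    Of the three options, the plain PHP array file is usually the one to benchmark first: with an opcode cache (e.g. APC) the included file stays compiled in shared memory, so each lookup is an ordinary hash access with no IPC and no SQL parsing. A minimal sketch, with hypothetical file names:

        <?php
        // Hedged sketch: a build step writes the config as a parseable PHP array.
        $config = array('key1' => 'value1' /* ... ~30,000 entries ... */);
        file_put_contents('config.compiled.php',
            '<?php return ' . var_export($config, true) . ';');

        // At runtime: one include; the opcode cache keeps it hot after first use.
        $config = include 'config.compiled.php';
        echo $config['key1'];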

  • database splitting; multiple tables

    - by Ben
    I am coding a classifieds ad web app. What is the optimal way to structure the database for this? Because of the high repeatability, would it be faster (in terms of searching/indexing) to have a separate table in the database for each city? Or would it be okay to have one table for all cities (it would have a lot of rows)? The classifieds table has id, user_id, city_name, category, [description and detail fields].

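    For context: the single-table route is normally workable here as long as the city column is indexed, since the index narrows each search to one city's rows without the maintenance burden of a table per city. A minimal sketch of that schema (types and index choice are assumptions, not prescriptions):

        -- Hedged sketch: column types are illustrative.
        CREATE TABLE classifieds (
            id          INT PRIMARY KEY,
            user_id     INT NOT NULL,
            city_name   VARCHAR(100) NOT NULL,
            category    VARCHAR(50)  NOT NULL,
            description TEXT
        );

        -- A composite index serves "ads in city X, category Y" with one index scan.
        CREATE INDEX idx_city_category ON classifieds (city_name, category);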

  • How to differentiate between two similar fields when joining tables in LINQ

    - by Azhar
    How do I differentiate between two fields of the same name in select new, e.g. c.Description and lt.Description? Giving the anonymous-type members explicit names resolves the collision:

        DataTable lDt = new DataTable();
        try
        {
            lDt.Columns.Add(new DataColumn("AreaTypeID", typeof(Int32)));
            lDt.Columns.Add(new DataColumn("CategoryRef", typeof(Int32)));
            lDt.Columns.Add(new DataColumn("Description", typeof(String)));
            lDt.Columns.Add(new DataColumn("CatDescription", typeof(String)));
            EzEagleDBDataContext lDc = new EzEagleDBDataContext();
            var lAreaType = (from lt in lDc.tbl_AreaTypes
                             join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                             where lt.AreaTypeID == pTypeId
                             select new
                             {
                                 lt.AreaTypeID,
                                 lt.CategoryRef,
                                 Description = lt.Description,     // explicit alias...
                                 CatDescription = c.Description    // ...avoids two members named Description
                             }).ToArray();
            for (int j = 0; j < lAreaType.Length; j++)   // arrays use Length, not Count
            {
                DataRow dr = lDt.NewRow();
                dr["AreaTypeID"] = lAreaType[j].AreaTypeID;
                dr["CategoryRef"] = lAreaType[j].CategoryRef;
                dr["Description"] = lAreaType[j].Description;
                dr["CatDescription"] = lAreaType[j].CatDescription;
                lDt.Rows.Add(dr);
            }
        }
        catch (Exception ex)
        {
            // Note: swallowing exceptions here hides failures.
        }


  • hibernate fails when mapping two tables

    - by sebbalex
    Hi guys, I'd like to understand how this is possible: while I was working with one table everything worked fine, but when I mapped another table it fails as shown below.

    Glassfish start:

        INFO: configuring from resource: /hibernate.cfg.xml
        INFO: Configuration resource: /hibernate.cfg.xml
        INFO: Reading mappings from resource : hibernate_centrale.hbm.xml   // first table
        INFO: Mapping class: com.italtel.patchfinder.objects.centrale - centrale
        INFO: Reading mappings from resource : hibernate_impianti.hbm.xml   // second table
        INFO: Mapping class: com.italtel.patchfinder.objects.Impianto - impianti
        INFO: Configured SessionFactory: null
        INFO: schema update complete
        INFO: Hibernate: select centrale0_.id as id0_, centrale0_.name as name0_, centrale0_.impianto as impianto0_, centrale0_.servizio as servizio0_ from centrale centrale0_ group by centrale0_.name
        INFO: Hibernate: select centrale0_.id as id0_, centrale0_.name as name0_, centrale0_.impianto as impianto0_, centrale0_.servizio as servizio0_ from centrale centrale0_ where centrale0_.name='ANCONA' order by centrale0_.name asc

    Error:

        org.hibernate.hql.ast.QuerySyntaxException: impianti is not mapped [from impianti where impianto='SD' order by modulo asc]
            at org.hibernate.hql.ast.util.SessionFactoryHelper.requireClassPersister(SessionFactoryHelper.java:181)
        .....

    Config, table 1:

        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping>
            <class name="com.italtel.patchfinder.objects.Impianto" table="impianti">
                <id column="id" name="id">
                    <generator class="increment"/>
                </id>
                <property name="impianto"/>
                <property name="modulo"/>
            </class>
        </hibernate-mapping>

    Table 2:

        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping>
            <class name="com.italtel.patchfinder.objects.centrale" table="centrale">
                <id column="id" name="id">
                    <generator class="increment"/>
                </id>
                <property name="name"/>
                <property name="impianto"/>
                <property name="servizio"/>
            </class>
        </hibernate-mapping>

    Connection stuff:

        ...
        <property name="hbm2ddl.auto">update</property>
        <mapping resource="hibernate_centrale.hbm.xml"/>
        <mapping resource="hibernate_impianti.hbm.xml"/>
        </session-factory>
        </hibernate-configuration>

    Class:

        public List loadAll() {
            Session session = sessionFactory.getCurrentSession();
            session.beginTransaction();
            return session.createQuery("from centrale group by name").list();
        }

        public List<centrale> loadImplants(String centrale) {
            Session session = sessionFactory.getCurrentSession();
            session.beginTransaction();
            return session.createQuery("from centrale where name='" + centrale + "' order by name asc").list();
        }

        public List<Impianto> loadModules(String implant) {
            Session session = sessionFactory.getCurrentSession();
            session.beginTransaction();
            return session.createQuery("from impianti where impianto='" + implant + "' order by modulo asc").list();
        }
        }

    Do you have any advice? Thanks in advance.

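    The error message points at the likely fix: HQL queries name mapped classes, not database tables. The second class is mapped as Impianto (table impianti), so "from impianti" is unknown to Hibernate; "from centrale" only works because that class is itself named centrale. A sketch of loadModules under that reading, also switching to a bound parameter to avoid concatenating user input into HQL:

        // Hedged sketch: query the entity name (Impianto), not the table (impianti).
        public List<Impianto> loadModules(String implant) {
            Session session = sessionFactory.getCurrentSession();
            session.beginTransaction();
            return session
                .createQuery("from Impianto where impianto = :implant order by modulo asc")
                .setParameter("implant", implant)
                .list();
        }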

  • How do I select the max value from multiple tables in one column

    - by Derick
    I would like to get the latest date on which records were modified. Here is a simple sample SELECT:

        SELECT t01.name,
               t01.last_upd date1,
               t02.last_upd date2,
               t03.last_upd date3,
               'maxof123' maxdate
        FROM s_org_ext t01, s_org_ext_x t02, s_addr_org t03
        WHERE t02.par_row_id(+) = t01.row_id
          and t03.row_id(+) = t01.pr_addr_id
          and t01.int_org_flg = 'n';

    How can I get column maxdate to display the max of the three dates? Note: no UNION or sub-SELECT statements ;)

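    The (+) outer-join syntax marks this as Oracle, so the row-wise GREATEST function fits the constraints: no UNION, no sub-SELECT. One caveat: GREATEST returns NULL if any argument is NULL, so the outer-joined dates may need an NVL wrapper. A sketch (the choice of NVL fallback is an assumption):

        SELECT t01.name,
               GREATEST(t01.last_upd,
                        NVL(t02.last_upd, t01.last_upd),
                        NVL(t03.last_upd, t01.last_upd)) maxdate
        FROM s_org_ext t01, s_org_ext_x t02, s_addr_org t03
        WHERE t02.par_row_id(+) = t01.row_id
          and t03.row_id(+) = t01.pr_addr_id
          and t01.int_org_flg = 'n';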

  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1-breaking algorithm and a botnet with 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to

        - find out the password of any one user?
        - find out the password of a given user?
        - find out the password of all users?
        - find a way to log in as one of the users?
        - find a way to log in as a specific user?

    How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter? Here is my current understanding, after some googling. Please correct me in the answers if I misunderstood something.

        - If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones).
        - If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower).
        - To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords.
        - To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157). There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting.

    In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?

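    A quick sanity check of the arithmetic above, with Python standing in for a calculator (the 10^15 ops/sec botnet and the 1000-operation hash cost are the question's own assumptions):

        ops_per_sec = 10**15         # botnet assumption from the question
        ops_per_hash = 1000          # assumed cost of one SHA-1 run
        alphabet = 60                # upper + lower + digits, as approximated above

        hashes_per_day = ops_per_sec * 86400 / ops_per_hash   # ~8.6e16, i.e. ~10^17

        for length in (9, 10, 11):
            days = alphabet ** length / hashes_per_day
            print("all %d-char passwords: ~%.1f days" % (length, days))
        # all 9-char passwords: ~0.1 days   (about 3 hours)
        # all 10-char passwords: ~7.0 days  (about a week)
        # all 11-char passwords: ~420.0 days (about a year)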

  • Using sectioned tables from an sqlite file with an iPhone uitableview

    - by Chris
    I am currently trying to make a sectioned table like this:

        Section 1:
            Entry 1
            Entry 2
            Entry 3
        Section 2:
            Entry 4
            Entry 5
            Entry 6
        Section 3:
            Entry 7
            Entry 8
            ...

    However, using this code:

        Event *lists = (Event *)[eventList objectAtIndex:indexPath.row];
        accessStatement = "select * from DatabaseName";
        [self findEvents]; // Code that builds each object's strings from database
        cell.textLabel.text = lists.name;

    I get this instead:

        Section 1:
            Entry 1
            Entry 2
            Entry 3
        Section 2:
            Entry 1
            Entry 2
            Entry 3
        Section 3:
            Entry 1
            Entry 2
            ...

    It keeps repeating itself. How do I make the index continue in each section rather than restart? Thanks in advance.

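    The repetition happens because indexPath.row restarts at 0 in every section, so each section reads the first rows of the flat eventList. One common fix is to flatten the (section, row) pair into a single index by adding the row counts of the preceding sections. A sketch, assuming the usual UITableViewDataSource methods and the flat-array layout above:

        // Hedged sketch: offset indexPath.row by all earlier sections' row counts.
        NSInteger flatIndex = indexPath.row;
        for (NSInteger s = 0; s < indexPath.section; s++) {
            flatIndex += [self tableView:tableView numberOfRowsInSection:s];
        }
        Event *lists = (Event *)[eventList objectAtIndex:flatIndex];
        cell.textLabel.text = lists.name;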

  • Rows dropping when I try to join data from two tables

    - by blcArmadillo
    I have a fairly simple query I'm trying to write. If I run the following query:

        SELECT parts.id, parts.type_id
        FROM parts
        WHERE parts.type_id=1 OR parts.type_id=2 OR parts.type_id=4
        ORDER BY parts.type_id;

    I get all the rows I expect to be returned. Now when I try to grab the parent_unit from another table with the following query, six rows suddenly drop out of the result:

        SELECT parts.id, parts.type_id, sp.parent_unit
        FROM parts, serialized_parts sp
        WHERE (parts.type_id=1 OR parts.type_id=2 OR parts.type_id=4)
          AND sp.parts_id = parts.id
        ORDER BY parts.type_id

    In the past I've never really dealt with ORs in my queries, so maybe I'm just doing it wrong. That said, I'm guessing it's just a simple mistake. Let me know if you need sample data and I'll post some. Thanks.

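    The ORs are probably innocent; the join is the likely culprit. The condition sp.parts_id = parts.id makes this an inner join, so any part with no matching serialized_parts row disappears. If those six parts should stay, a LEFT JOIN keeps them with a NULL parent_unit - a sketch:

        -- Hedged sketch: LEFT JOIN keeps parts that have no serialized_parts row.
        SELECT parts.id, parts.type_id, sp.parent_unit
        FROM parts
        LEFT JOIN serialized_parts sp ON sp.parts_id = parts.id
        WHERE parts.type_id IN (1, 2, 4)
        ORDER BY parts.type_id;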

  • Auto width on tables

    - by Hulk
    An HTML table's cols and rows are generated dynamically; i.e., for the first instance it could be two rows and three columns, and next time it could be two rows and 10 columns. My question is how to adjust the width of the table automatically so that the table always appears 100% in the page, adjusting the column size and row size:

        <table>
            <tr><td></td><td></td><td></td></tr>
            <tr><td></td><td></td><td></td></tr>
        </table>

        <table>
            <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
            <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
            <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
        </table>

    Thanks..

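    No per-table measurement is needed for this: width: 100% makes the browser stretch the table to the page, and table-layout: fixed distributes that width evenly over however many columns the first row has. A minimal sketch:

        <!-- Hedged sketch: equal columns regardless of how many are generated. -->
        <style>
            table { width: 100%; table-layout: fixed; border-collapse: collapse; }
            td    { border: 1px solid #ccc; } /* border only to make cells visible */
        </style>
        <table>
            <tr><td>1</td><td>2</td><td>3</td></tr>
        </table>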

  • oracle global temporary tables

    - by mrp
    I created a global temp table. When I execute the code one statement at a time, it works fine, but when I execute it as a single script in TOAD, no records are created - there is just an empty global temp table. E.g.:

        CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN
        (
            COL1 NUMBER(9),
            COL2 VARCHAR2(30),
            COL3 DATE
        ) ON COMMIT PRESERVE ROWS
        /
        INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate);
        /
        INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate);
        /
        INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate);
        /
        COMMIT;

    When I run the above code one statement at a time it works fine, but as a script it runs without errors yet leaves no records in the temp table. Can anyone help me with this, please?

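    One thing worth ruling out: rows in a global temporary table are visible only to the session that inserted them; ON COMMIT PRESERVE ROWS keeps them for the session, not across sessions. If TOAD executes the script on one connection and the data grid browses the table on another, the browser will always show an empty table. A quick check is to query from within the same script (note also that following each ; with a / can make SQL*Plus-style tools execute a statement twice, which is worth cleaning up):

        -- Hedged check: run as the last statement of the same TOAD script/session.
        -- GTT rows are session-private, so only this count is meaningful.
        SELECT COUNT(*) FROM TEMP_TRAN;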

  • Mysql - Summary Tables

    - by jwzk
    Which method do you suggest, and why? Creating a summary table and . . .

        1) updating the table as the action occurs, in real time;
        2) running GROUP BY queries every 15 minutes to update the summary table;
        3) something else?

    The data must be near real time; it can't wait an hour, a day, etc.

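    Given the near-real-time requirement, option 1 is the usual answer, and a trigger keeps the summary in step without touching application code. A minimal MySQL sketch with hypothetical table names (action_type is assumed to be the summary table's primary key, so the upsert works):

        -- Hedged sketch: 'actions' and 'action_summary' are hypothetical tables.
        CREATE TRIGGER trg_action_summary
        AFTER INSERT ON actions
        FOR EACH ROW
            INSERT INTO action_summary (action_type, cnt)
            VALUES (NEW.action_type, 1)
            ON DUPLICATE KEY UPDATE cnt = cnt + 1;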

  • SHAREPOINT: Custom Field type property storage defined for custom field

    - by Eric Rockenbach
    OK, here is a great question. I have a set of generic custom fields that are highly configurable from an end-user perspective, and the configuration is getting overbearing: there are nearly 100-plus items each custom field lets you configure, in the areas of server/client validation, server/client events/actions, server/client parent/child bindings, display properties for form/control, etc., etc. Right now I'm storing most of these values as "Text" in the field XML for my property schema. I'm very familiar with the multi-column value, but this is not a complex custom type in the sense of an array. I have also considered creating serializable objects, stuffing them into the text field, and then pulling them out and de-serializing them when editing through the field editor or acting on the rules through the custom SPField. So I'm trying to take the following, for example:

        <PropertySchema>
          <Fields>
            <Field Name="EntityColumnName" Hidden="TRUE" DisplayName="EntityColumnName"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnParentPK" Hidden="TRUE" DisplayName="EntityColumnParentPK"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnValueName" Hidden="TRUE" DisplayName="EntityColumnValueName"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityListName" Hidden="TRUE" DisplayName="EntityListName"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntitySiteUrl" Hidden="TRUE" DisplayName="EntitySiteUrl"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    and turn it into this:

        <PropertySchema>
          <Fields>
            <Field Name="ServerValidationRules" Hidden="TRUE" DisplayName="ServerValidationRules"
                   Type="ServerValidationRulesType">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    Ideas?????

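    The serializable-object idea floated above is workable: define a rules class, serialize it to a string for the single Text-backed property, and deserialize it in the field editor. A minimal sketch, where ServerValidationRules and its members are hypothetical:

        // Hedged sketch: ServerValidationRules is a hypothetical rules container
        // (XmlSerializer needs a public parameterless constructor and public members).
        public class ServerValidationRules
        {
            public string EntityColumnName { get; set; }
            public string EntityListName { get; set; }
            // ... the remaining rule settings ...
        }

        // Serialize for storage in the field's property schema.
        public static string ToPropertyValue(ServerValidationRules rules)
        {
            var serializer = new XmlSerializer(typeof(ServerValidationRules));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, rules);
                return writer.ToString();
            }
        }

        // Deserialize when the field editor or the field itself needs the rules.
        public static ServerValidationRules FromPropertyValue(string value)
        {
            var serializer = new XmlSerializer(typeof(ServerValidationRules));
            using (var reader = new StringReader(value))
            {
                return (ServerValidationRules)serializer.Deserialize(reader);
            }
        }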

  • How do I make multi-page landscape tables in LaTeX

    - by Tim
    The title is pretty much the extent of my question. I am trying to insert a large table into a document using the xtabular environment. If I wrap the xtabular environment in a landscape environment, the bottom of my table gets chopped off. Does anyone have any better suggestions? Thanks

        \begin{landscape}
        \singlespace
        \begin{xtabular}{|c|c|c|c|c|}
        \hline
        some & stuff & ... & \\
        \end{xtabular}
        \end{landscape}

    Tim

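    One combination often suggested for this: longtable breaks across pages on its own, and inside pdflscape's landscape environment each of those pages comes out rotated. A sketch under that assumption (preamble lines included; the column spec mirrors the question's):

        % Hedged sketch: longtable inside pdflscape's landscape environment.
        \usepackage{pdflscape}  % rotated pages for the enclosed content
        \usepackage{longtable}  % tables that break across pages
        ...
        \begin{landscape}
        \singlespace
        \begin{longtable}{|c|c|c|c|c|}
        \hline
        some & stuff & goes & in & here \\
        \hline
        \end{longtable}
        \end{landscape}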

  • How do I design the file storage issue?

    - by user102533
    I am working on an application that creates video files and stores them in a folder on the C:\ drive. I expect there will be a large number of these files in the future, and that we will run out of disk space at some point (on our VPS). When the time comes to upgrade, we plan either to use one of the cloud providers to store the files or to have our existing provider add another disk (say, a D:\ drive). Either way, I want to design the app now so that moving to a different location later would not be an issue and would be transparent to the end user. The code that creates these files supports two ways:

        myObj.SetOutputToDisk(<path to store>);

    or

        myObj.SetOutputToMemoryStream(ms);

    If we go with the cloud architecture, I assume we might have one of the following combinations: cloud files + existing VPS, or cloud files + cloud Windows server. Given the unknowns at this time, how would I go about designing this?

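    A common way to make the location transparent is to put an interface between the video writer and its destination and choose the implementation from configuration. A minimal sketch, where every name is hypothetical:

        // Hedged sketch: IVideoStore hides where the bytes end up.
        public interface IVideoStore
        {
            void Save(string fileName, Stream content);
        }

        // Today: local disk. basePath comes from configuration, so a move
        // from C:\ to D:\ is a config change that callers never see.
        public class DiskVideoStore : IVideoStore
        {
            private readonly string basePath;
            public DiskVideoStore(string basePath) { this.basePath = basePath; }

            public void Save(string fileName, Stream content)
            {
                using (var file = File.Create(Path.Combine(basePath, fileName)))
                {
                    content.CopyTo(file);  // .NET 4; on 3.5, loop over a byte[] buffer
                }
            }
        }

        // Later: a CloudVideoStore implementing the same interface, backed by
        // whichever provider is chosen; callers keep calling IVideoStore.Save.

    The existing SetOutputToMemoryStream(ms) path feeds either implementation, since both just consume a stream.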

  • Paging enormous tables on DB2

    - by grenade
    We have a view that, without constraints, will return 90 million rows, and a reporting application that needs to display paged datasets of that view. We're using NHibernate and recently noticed that its paging mechanism looks like this:

        select * from
            (select rownumber() over() as rownum,
                    this_.COL1 as COL1_20_0_,
                    this_.COL2 as COL2_20_0_
             FROM SomeSchema.SomeView this_
             WHERE this_.COL1 = 'SomeValue') as tempresult
        where rownum between 10 and 20

    The query brings the DB server to its knees. I think what's happening is that the nested query assigns a row number to every row satisfying the WHERE clause before selecting the subset (rows 10-20). Since the nested query will return a lot of rows, the mechanism is not very efficient. I've seen lots of tips and tricks for doing this efficiently on other SQL platforms, but I'm struggling to find a DB2 solution. In fact, an article on IBM's own site recommends the approach that NHibernate has taken. Is there a better way?

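    One lever worth trying on the DB2 side: give the optimizer an ORDER BY on an indexed column inside OVER(), and tell it how few rows are wanted with OPTIMIZE FOR, so it can stop early instead of numbering every qualifying row. A hedged sketch; whether it helps depends on the indexes behind the view:

        -- Hedged sketch: assumes COL1 is backed by an index the optimizer can walk.
        select * from
            (select rownumber() over(order by this_.COL1) as rownum,
                    this_.COL1, this_.COL2
             FROM SomeSchema.SomeView this_
             WHERE this_.COL1 = 'SomeValue') as tempresult
        where rownum between 10 and 20
        optimize for 20 rows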

  • Anyone using NoSQL databases for medical record storage?

    - by Brian Bay
    Electronic medical records are composed of different types of data. Visit information (date/location/insurance info) seems to lend itself to an RDBMS. Other types of medical information, such as lab reports, x-rays, photos, and electronic signatures, are document-based and would seem to be good candidates for a 'document-oriented' database, such as MongoDB. Traditionally, binary data would be stored as a BLOB in an RDBMS. A hybrid approach using a traditional RDBMS along with a 'document-oriented' database would seem like a good alternative to this. Another alternative would be something like DB2 pureXML. The ultimate answer could be 'it depends', but I really just wanted to get some general feedback/ideas on this. Is anyone using the NoSQL approach for medical records?


  • Two asp.net applications to use the same membership tables - specifically user login data

    - by Lk
    Hi, I have created an ASP.NET solution with two applications. They both use the same database, which is set up with .NET membership and roles. Application 1 uses the membership for authentication to an administration area - this works fine. Application 2 has a different applicationID to App1. I want to be able to use the existing user accounts to manage App2's authentication needs. How is this best achieved? Do I just match App2's applicationID to App1's, or is there another way? Many thanks, Lk

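    The supported route is not to edit the aspnet_Applications table but to give both applications' membership (and roles) providers the same applicationName in web.config; the providers then resolve to the same applicationID and share the user accounts. A sketch of the relevant fragment, with hypothetical connection-string and application names:

        <!-- Hedged sketch: use the same applicationName in BOTH web.config files. -->
        <membership defaultProvider="SqlProvider">
            <providers>
                <clear/>
                <add name="SqlProvider"
                     type="System.Web.Security.SqlMembershipProvider"
                     connectionStringName="SharedDb"
                     applicationName="/SharedApp" />
            </providers>
        </membership>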
