Search Results

Search found 10719 results on 429 pages for 'temp tables'.


  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX, or DbContext if we go code first). I can think of a few strategies right off the bat:

    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds complexity, although I assume EF makes this fairly straightforward.

    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, project, module, or screen, depending on how granular we want to get. The problem I see with this is that certain tables inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which I think violates DRY), or do they go in a separate model that is used by every project?

    Don't split at all - one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, compile time, and possibly run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options I'm not thinking of that work better than the ones above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
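
    As a minimal sketch of the "split by intent" idea (assuming EF 6 code first; the context and entity names here are hypothetical, not from the original database), shared lookup entities such as User can be mapped onto the same physical table from each bounded context:

        // Hypothetical bounded contexts for the "split by intent" strategy.
        using System.Data.Entity;   // EF 6 code first

        public class User    { public int Id { get; set; } public string Name { get; set; } }
        public class Order   { public int Id { get; set; } public int UserId { get; set; } }
        public class Invoice { public int Id { get; set; } public int UserId { get; set; } }

        // One context per module/intent, each exposing only the tables it needs.
        public class OrderingContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
            public DbSet<User> Users { get; set; }    // shared lookup, mapped in both contexts

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Entity<User>().ToTable("User", "dbo");   // same physical table
            }
        }

        public class BillingContext : DbContext
        {
            public DbSet<Invoice> Invoices { get; set; }
            public DbSet<User> Users { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Entity<User>().ToTable("User", "dbo");
            }
        }

    A unit of work that genuinely spans two contexts can then be wrapped in a System.Transactions.TransactionScope rather than merging the models.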

    Read the article

  • Oracle 10.2.0.1 --> 10.2.0.4 patchset errors on Advanced Queuing tables. Serious or not?

    - by hurfdurf
    We're running Oracle on RHEL 5.4 64-bit. We recently did an upgrade from 10.2.0.1 to 10.2.0.4. Many errors were generated during the upgrade (a sample from trace.log is listed below), but during application testing afterward everything seemed fine (clean EXP, inserts, updates, deletes, etc.). The errors all look related to Advanced Queuing tables and views. We are not using replication at all; this is a simple single-instance db.

        ORA-24002: QUEUE_TABLE SYS.AQ_EVENT_TABLE does not exist
        ORA-24032: object AQ$_AQ_SRVNTFN_TABLE_T exists, index could not be created
        ORA-24032: object AQ$_ALERT_QT_S exists, index could not be created for queue
        ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 117
        ORA-06512: at "SYS.DBMS_AQADM_SYS", line 5116

    Is this worth worrying about, and if so, how do I go about cleaning up/recreating the corrupted and/or missing objects?

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has information on all the SMS that passed through the company's servers, while the other has the actual billing records for those SMS. My job is to compare samples of both tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but, for whatever reason, not charged to the user. The columns I will use to compare are the sender's phone number and the exact time the SMS was sent. One issue here is that the times are usually the same on both sides, but in many cases differ by 1 or 2 seconds.

    I have, so far, two alternatives for doing this:

    1. (PL/SQL) Create two tables where I temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and compare them one by one using cursors. In this case, the procedure would be run on the server where one of the sources lives, so the contents of the other would be looked up over a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the queries to text files, one for each source. Then open the first file and load all of its content into a hash_map (key-value) in C++, where the key is the phone number and the value is a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look up numberX's entry in the hash_map (which will be a list of times) and check whether timeX is in that list. If it isn't, save it somewhere so it can finally be stored in an "uncharged" table (this would also be the final step in case 1).

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be feasible. That's the reason I wanted to use hash_maps. Which do you think is the better option?
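
    The poster sketches the second alternative in C++ with a hash_map; here is the same idea as a hedged C# sketch (the file names, the "number,time" line format, and the 2-second tolerance are assumptions for illustration):

        // Load the billing sample into a dictionary keyed by phone number, then flag
        // sent SMS that have no billed record within a small clock-skew tolerance.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        var billed = new Dictionary<string, List<DateTime>>();
        foreach (var line in File.ReadLines("billed_sample.txt"))
        {
            var parts = line.Split(',');
            if (!billed.TryGetValue(parts[0], out var times))
                billed[parts[0]] = times = new List<DateTime>();
            times.Add(DateTime.Parse(parts[1]));
        }

        var uncharged = new List<string>();
        foreach (var line in File.ReadLines("sent_sample.txt"))
        {
            var parts = line.Split(',');
            var sentAt = DateTime.Parse(parts[1]);
            bool matched = billed.TryGetValue(parts[0], out var times)
                           && times.Any(t => Math.Abs((t - sentAt).TotalSeconds) <= 2);
            if (!matched)
                uncharged.Add(line);   // candidate rows for the "uncharged" table
        }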

    Read the article

  • Changed username. Now I cannot log on or view my previous files

    - by Lauren
    I wanted to change my username and followed the instructions from How do I change my username? by creating a temp user with admin privileges. While logged in as temp, I did:

        sudo usermod -l newname oldname
        sudo usermod -d /home/newname -m newname

    Now I cannot log in under newname, and /home only lists newname and temp. Reading through other sites now, it seems I should have used usermod -d /home/newname -m oldname. Based on this, I think I may have deleted the contents of my previous home folder?? I'm sure I'm not the first person to do something stupid while changing usernames, but any help is greatly appreciated. Thank you!

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but how you balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.

    Read the article

  • MySQL: Auto-increment value: 0 is smaller than max used value: xx

    - by Rhodri
    Increasingly I'm getting tables that have to be repaired, with the message returned of: Auto-increment value: 0 is smaller than max used value: xx. This has happened on tables with 200 rows and tables with ~3 million rows, but so far the same few tables have had the problem. I'm running MySQL 5.0.22. The repairs are run by a script which checks every minute whether MySQL tables need repairing. I also have an automated backup of the 6 Gigabyte database running every two hours, and the repairs always get triggered around the time of the backup. Any ideas?

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the content of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        ----------------------------------------------------------------------------
        | Id  | Operation              | Name                   |  Bytes |  Cost |
        ----------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |   108K |   939 |
        |   1 | SORT ORDER BY          |                        |   108K |   939 |
        |   2 | NESTED LOOPS OUTER     |                        |   108K |   938 |
        |*  3 | HASH JOIN RIGHT OUTER  |                        |   103K |   762 |
        |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS |   2058 |     3 |
        |* 20 | HASH JOIN RIGHT OUTER  |                        |  73472 |   759 |
        |  21 | VIEW                   | ALL_EXTERNAL_TABLES    |   2097 |     3 |
        |* 34 | HASH JOIN RIGHT OUTER  |                        |  39920 |   755 |
        |  35 | VIEW                   | ALL_MVIEWS             |     51 |     7 |
        |  58 | NESTED LOOPS OUTER     |                        |  39104 |   748 |
        |  59 | VIEW                   | ALL_TABLES             |   6704 |   668 |
        |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |   2025 |     5 |
        | 106 | VIEW                   | ALL_PART_TABLES        |    277 |    11 |
        ----------------------------------------------------------------------------

    And the same query on 9i:

        ----------------------------------------------------------------------------
        | Id  | Operation              | Name                   |  Bytes |  Cost |
        ----------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |    16P |   55G |
        |   1 | SORT ORDER BY          |                        |    16P |   55G |
        |   2 | NESTED LOOPS OUTER     |                        |    16P |  862M |
        |   3 | NESTED LOOPS OUTER     |                        |  5251G |  992K |
        |   4 | NESTED LOOPS OUTER     |                        |  4243M |  2578 |
        |   5 | NESTED LOOPS OUTER     |                        |  2669K |  1440 |
        |*  6 | HASH JOIN OUTER        |                        |   398K |   302 |
        |   7 | VIEW                   | ALL_TABLES             |   342K |   276 |
        |  29 | VIEW                   | ALL_MVIEWS             |     51 |    20 |
        |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |   2043 |       |
        |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    |  1777K |       |
        |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS |  1744K |       |
        |* 96 | VIEW                   | ALL_PART_TABLES        |   852K |       |
        ----------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        ----------------------------------------------------------------------------
        | Id  | Operation              | Name                   |  Bytes |  Cost |
        ----------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |  7587K |  3704 |
        |   1 | SORT ORDER BY          |                        |  7587K |  3704 |
        |*  2 | HASH JOIN OUTER        |                        |  7587K |   822 |
        |*  3 | HASH JOIN OUTER        |                        |  5262K |   616 |
        |*  4 | HASH JOIN OUTER        |                        |  2980K |   465 |
        |*  5 | HASH JOIN OUTER        |                        |   710K |   432 |
        |*  6 | HASH JOIN OUTER        |                        |   398K |   302 |
        |   7 | VIEW                   | ALL_TABLES             |   342K |   276 |
        |  29 | VIEW                   | ALL_MVIEWS             |     51 |    20 |
        |  50 | VIEW                   | ALL_PART_TABLES        |   852K |   104 |
        |  78 | VIEW                   | ALL_TAB_COMMENTS       |   2043 |    14 |
        |  93 | VIEW                   | ALL_EXTERNAL_LOCATIONS |  1744K |    31 |
        | 106 | VIEW                   | ALL_EXTERNAL_TABLES    |  1777K |    28 |
        ----------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
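
    To make the client-side join concrete, here is a hedged sketch of a generic left hash join keyed on (owner, object name) - an illustration of the technique only, not the product's actual code:

        // Hash the subsidiary rows once (e.g. ALL_PART_TABLES) and probe with the
        // base rows (e.g. ALL_TABLES); misses yield null, i.e. a LEFT JOIN.
        using System;
        using System.Collections.Generic;

        static class DictionaryJoin
        {
            public static IEnumerable<(TBase Row, TExtra Extra)> LeftHashJoin<TBase, TExtra>(
                IEnumerable<TBase> baseRows,
                IEnumerable<TExtra> extraRows,
                Func<TBase, (string Owner, string Name)> baseKey,
                Func<TExtra, (string Owner, string Name)> extraKey)
                where TExtra : class
            {
                var lookup = new Dictionary<(string, string), TExtra>();
                foreach (var e in extraRows)
                    lookup[extraKey(e)] = e;

                foreach (var b in baseRows)
                    yield return (b, lookup.TryGetValue(baseKey(b), out var e) ? e : null);
            }
        }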
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.

    Read the article

  • Stairway to T-SQL DML Level 5: The Mathematics of SQL: Part 2

    Joining tables is a crucial concept for understanding data relationships in a relational database. When you are working with your SQL Server data, you will often need to join tables to produce the results your application requires. Having a good understanding of set theory, of the mathematical operators available, and of how they are used to join tables will make it easier for you to retrieve the data you need from SQL Server.

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the Hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the x, y, z values in the output only fluctuate between -0 and 1, not more than that, so I assume I will need to multiply them back by the screen width and height. Even then it still doesn't seem to animate according to the original x, y, z points. Are there any transformations I might be missing?

        while ((line = r.ReadLine()) != null)
        {
            string[] temp = line.Split(',');
            int x = (int)(float.Parse(temp[0]) * maxWidth);
            int y = (int)(float.Parse(temp[1]) * maxHeight);
        }
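
    If the recorded values really are normalized, one transformation that is easy to miss is flipping the Y axis, since screen Y grows downward while the joint's Y grows upward. A hedged sketch, assuming a [-1, 1] range (an assumption, not something stated in the post):

        // Map a joint coordinate assumed to be normalized to [-1, 1] into pixel
        // space, inverting Y because screen coordinates grow downward.
        using System;

        static class JointMapping
        {
            public static (int X, int Y) ToScreen(float nx, float ny, int maxWidth, int maxHeight)
            {
                int x = (int)((nx * 0.5f + 0.5f) * maxWidth);             // [-1,1] -> [0, maxWidth]
                int y = (int)((1.0f - (ny * 0.5f + 0.5f)) * maxHeight);   // flip Y for screen space
                return (x, y);
            }
        }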

    Read the article

  • Converting Celsius Processor Temperature to Fahrenheit

    - by WindowsEscapist
    I'm editing a Conky theme. I would like it to output the processor temperatures in degrees Fahrenheit instead of Celsius. In the ~/.conkyrc file, the command sensors | grep 'Core 0' | cut -c18-19 is used to find the temperature in Celsius for the first processor core. I want to use bc to compute this (give it outputvalue*9/5+32). Problem is, bc wants just absolute values, and I see no way to pass it program output. If I try to use something like temp=$(sensors | grep 'Core 0' | cut -c18-19) & echo 'temp*9/5+32' | bc, it ends up giving me 32 because it registers "temp" as a 0.

    Read the article

  • How to store and update data table on client side (iOS MMO)

    - by farseer2012
    I'm currently developing an iOS MMO game with cocos2d-x. The game depends on many data tables (Excel files) provided by the designers. These tables contain data such as how much gold/crystal it costs to upgrade a building (barracks, laboratory, etc.). We have about 10 tables, each with about 50 rows of data. My question is how to store those tables on the client side, and how to update them once they have been modified on the server side. My current idea: use SQLite to store the data on the client side; the server parses the Excel files and sends the data to the client in JSON format, and the client parses the JSON string and saves it to the SQLite file. Is there a better method? I've noticed that some games store csv files on the client side - how do they update those files? Could the server send a whole file directly to the client?

    Read the article

  • Using SQL Server's Output Clause

    When you are inserting, updating, or deleting records from a table, SQL Server keeps track of the records that are changed in two pseudo tables: INSERTED and DELETED. These tables are normally used in DML triggers. If you use the OUTPUT clause on an INSERT, UPDATE, DELETE or MERGE statement, you can expose the records that go to these pseudo tables to your application and/or T-SQL code.
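
    As a quick illustration of consuming the OUTPUT clause from application code, here is a hedged C# sketch (the table, columns, and connection string are made up for the example):

        // Read the rows an UPDATE changed via OUTPUT; the rows come back like a SELECT.
        using System;
        using System.Data.SqlClient;

        class OutputClauseDemo
        {
            static void Main()
            {
                const string sql =
                    "UPDATE dbo.Product " +
                    "SET Price = Price * 1.1 " +
                    "OUTPUT inserted.ProductId, deleted.Price AS OldPrice, inserted.Price AS NewPrice " +
                    "WHERE CategoryId = @cat;";

                using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@cat", 3);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine($"{reader["ProductId"]}: {reader["OldPrice"]} -> {reader["NewPrice"]}");
                    }
                }
            }
        }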

    Read the article

  • How do I swap two objects in a GC language without triggering GC?

    - by TenFour04
    I have two array lists that I want to swap each frame. My question is: does the variable 'temp' need to be a member variable to avoid triggering GC, assuming this method is called on dozens of objects each frame? I'm not creating a new object, just a new reference to an existing object.

        public void LateUpdate() {
            ArrayList<int> temp = previousFrameCollisions;
            previousFrameCollisions = currentFrameCollisions;
            currentFrameCollisions = temp;
            currentFrameCollisions.clear();
        }

    I've been told there's no reason to make a primitive into a member variable just to avoid GC, so my best guess is that this also applies to object references.

    Read the article

  • How to use filegroups for DB split?

    - by Robin Jain
    In my project I have one DB used for everything. I want to break it into two databases: static tables holding lookup values would be stored in one DB, and the other DB would hold the tables with dynamic data. My problem is how I would use foreign key constraints between those two DBs. Can someone suggest a way to proceed, ideally with an example? I thought of using synonyms for the tables and then putting constraints on the synonyms, but later I learned that synonyms can't be used for constraints. I need to maintain relationships among the tables from both DBs. The issue is with updates: with a new release I just want to update the lookup tables, and that is why I want to split my DB. I would like to know how filegroups could be used for this.

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP, and the application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated.

    Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However I am concerned about a few things:

    Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (and so make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which would be a real problem for my application.

    So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? Flawed? Overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
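
    For reference, a hedged C# sketch of the scheme as described (PBKDF2-derived 128-bit key, AES-CBC, a random salt sent once, and a fresh IV prepended to every datagram); this only illustrates the poster's current approach, with made-up parameters such as the iteration count:

        // Derive a 128-bit key from the passphrase and encrypt one small payload per packet.
        using System;
        using System.Security.Cryptography;

        static class PacketCrypto
        {
            public static byte[] DeriveKey(string passphrase, byte[] salt) =>
                new Rfc2898DeriveBytes(passphrase, salt, 10000).GetBytes(16);   // 128-bit key

            public static byte[] EncryptPacket(byte[] key, byte[] payload)
            {
                using (var aes = Aes.Create())
                {
                    aes.Key = key;
                    aes.Mode = CipherMode.CBC;
                    aes.GenerateIV();                         // fresh random IV per packet

                    using (var enc = aes.CreateEncryptor())
                    {
                        byte[] cipher = enc.TransformFinalBlock(payload, 0, payload.Length);
                        byte[] packet = new byte[aes.IV.Length + cipher.Length];
                        Buffer.BlockCopy(aes.IV, 0, packet, 0, aes.IV.Length);
                        Buffer.BlockCopy(cipher, 0, packet, aes.IV.Length, cipher.Length);
                        return packet;                        // IV || ciphertext, one UDP datagram
                    }
                }
            }
        }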

    Read the article

  • Facebook Error: "The client token cannot be used for this API" - works on DEV and STAGE but not on LIVE app?

    - by Studio Temp
    I've built a notification sending system that sends notifications to all users of our app, using the app access token. This system is currently running on my localhost. When I configure it with the appid and appsecret for my dev and stage environments, it works fine. But when I put in the appid and appsecret of the LIVE app, I get this error: {"message":"The client token cannot be used for this API", "type":"OAuthException", "code":190} So what's different between dev and live? Dev and Stage are in sandbox mode, Live is not. So I tried disabling sandbox mode on Dev/Stage and they continue to function fine. Dev works fine, Stage works fine, Live gives this error. All other code is the same except for the appid, appsecret, and redirect_uri (changing it to match the domain of each environment). I have checked this post, but unfortunately resetting our appsecret on a site with 1,000,000 users is not something we can do at the moment (too much other functionality relies on it).

    Read the article

  • How to make a GRANT persist for a table that's being dropped and re-created?

    - by Eli Courtwright
    I'm on a fairly new project where we're still modifying the design of our Oracle 11g database tables. As such, we drop and re-create our tables fairly often to make sure that our table creation scripts work as expected whenever we make a change. Our database consists of 2 schemas. One schema has some tables with INSERT triggers which cause the data to sometimes be copied into tables in our second schema. This requires us to log into the database with an admin account such as sysdba and GRANT access to the first schema to the necessary tables on the second schema, e.g. GRANT ALL ON schema_two.SomeTable TO schema_one; Our problem is that every time we make a change to our database design and want to drop and re-create our database tables, the access we GRANT-ed to schema_one went away when the table was dropped. Thus, this creates another annoying step wherein we must log in with an admin account to re-GRANT the access every time one of these tables is dropped and re-created. This isn't a huge deal, but I'd love to eliminate as many steps as possible from our development and testing procedures. Is there any way to GRANT access to a table in such a way that the GRANT-ed permissions survive a table being dropped and then re-created? And if this isn't possible, then is there a better way to go about this?

    Read the article

  • Export Multiple Crystal Reports ASP.NET

    - by AProgrammer
    Hey all, I want to export 2 different reports when I click an Export button. The problem is the routine only fires once and I only get one report to print out. Am I doing something wrong? I think it has something to do with the HTTPResponse, but I'm not sure. Here's my code:

        Dim badgeSize As Integer = 0          'Drop Down selection
        Dim badgeData As New DataSet          'Visitor Badge Data
        Dim badgeEmployeeData As New DataSet  'Employee Badge Data
        Dim badgeTotals As Integer = 0        'Totals for both

        badgeSize = ddlBadgeSize.SelectedValue

        ' Get Visitor Data
        badgeData = _DatabaseAccess.GetProjectReportData(sessionInfo.myEventID, sessionInfo.EventCreator)

        ' Get Employee Data
        badgeEmployeeData = _DatabaseAccess.GetProjectReportEmployeeData(sessionInfo.myEventID, sessionInfo.EventCreator)

        'Obtain Totals
        badgeTotals = badgeData.Tables(0).Rows.Count + badgeEmployeeData.Tables(0).Rows.Count

        If badgeTotals = 0 Then
            ShowMessage("There are no badges to print.")
            Exit Sub
        End If

        If badgeSize.Equals(0) Then 'Small
            If badgeEmployeeData.Tables(0).Rows.Count > 0 Then
                If badgeEmployeeData.Tables(0).Rows.Count >= 6 Then
                    PrintProjectBadges(badgeEmployeeData, "Employee", badgeSize)
                Else
                    PrintStandardDymo(badgeEmployeeData, "Employee", 1)
                End If
            End If
            If badgeData.Tables(0).Rows.Count > 0 Then
                If badgeData.Tables(0).Rows.Count >= 6 Then
                    PrintProjectBadges(badgeData, "Visitor", badgeSize)
                Else
                    PrintStandardDymo(badgeData, "Visitor", 1)
                End If
            End If
        Else
            'do something else
        End If

    And the report code:

        Private Sub PrintProjectBadges(ByVal theData As DataSet, ByVal badgeType As String, ByVal badgeSize As Integer)
            Dim ourReport As New ReportDocument
            Dim crConnectionInfo As New ConnectionInfo(SetCrystalConnection)

            If badgeSize = 0 Then
                Try
                    If badgeType = "Visitor" Then
                        ourReport.Load(Server.MapPath("SmallProjectBadge.rpt"), OpenReportMethod.OpenReportByDefault) 'LIVE SERVER USE
                    Else
                        ourReport.Load(Server.MapPath("SmallProjectEmployeeBadge.rpt"), OpenReportMethod.OpenReportByDefault) 'LIVE SERVER USE
                    End If
                Catch ex As Exception
                    Dim TraceList As New ArrayList
                    TraceList.Add("DBLog")
                    DatabaseAccess.WriteToErrorLog("Visitor Registration", "Printing Project Badges", ex.Message, TraceEventType.Information, 1, TraceList)
                    Exit Sub
                End Try
                ourReport.SetDataSource(theData.Tables("Project"))
            Else
                'Do something else...
            End If

            Response.Buffer = True
            'Clear the response content and headers
            Response.ClearContent()
            Response.ClearHeaders()
            SetLogon(ourReport, crConnectionInfo)
            'Export the Report to Response stream in PDF format and file name Customers
            ourReport.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "Visitor_Badges")
            Response.End()
            'Response.Close()
        End Sub

    Any help would be much appreciated.

    Read the article

  • MS Access to sql server searching

    - by malou17
    How do I adapt this code to use a SQL Server database? In this code we used MS Access as the database.

        private void btnSearch_Click(object sender, System.EventArgs e)
        {
            String pcode = txtPcode.Text;
            int ctr = productsDS1.Tables[0].Rows.Count;
            int x;
            bool found = false;
            for (x = 0; x < ctr; x++)
            {
                if (productsDS1.Tables[0].Rows[x][0].ToString() == pcode)
                {
                    found = true;
                    break;
                }
            }
            if (found == true)
            {
                txtPcode.Text = productsDS1.Tables[0].Rows[x][0].ToString();
                txtDesc.Text = productsDS1.Tables[0].Rows[x][1].ToString();
                txtPrice.Text = productsDS1.Tables[0].Rows[x][2].ToString();
            }
            else
            {
                MessageBox.Show("Record Not Found");
            }
        }

        private void btnNew_Click(object sender, System.EventArgs e)
        {
            int cnt = productsDS1.Tables[0].Rows.Count;
            string lastrec = productsDS1.Tables[0].Rows[cnt - 1][0].ToString();   // last row is at index cnt - 1
            int newpcode = int.Parse(lastrec) + 1;
            txtPcode.Text = newpcode.ToString();
            txtDesc.Clear();
            txtPrice.Clear();
            txtDesc.Focus();
        }

    Here's the connection string:

        Jet OLEDB:Global Partial Bulk Ops=2;Jet OLEDB:Registry Path=;Jet OLEDB:Database Locking Mode=0;Data Source="J:\2009-2010\1st sem\VC#\Sample\WindowsApplication_Products\PointOfSales.mdb"
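
    To point the same DataSet-based code at SQL Server instead of the Jet/Access OLE DB provider, a hedged sketch (the server, database, table, and column names are placeholders, not from the original project):

        // Fill productsDS1 from SQL Server with SqlDataAdapter; the btnSearch_Click
        // and btnNew_Click code can keep using ds.Tables[0] unchanged.
        using System.Data;
        using System.Data.SqlClient;

        public class ProductsLoader
        {
            private const string ConnStr =
                "Server=MYSERVER;Database=PointOfSales;Integrated Security=true;";   // placeholder

            public DataSet LoadProducts()
            {
                var ds = new DataSet();
                using (var da = new SqlDataAdapter("SELECT ProductCode, Description, Price FROM Products", ConnStr))
                {
                    da.Fill(ds);
                }
                return ds;
            }
        }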

    Read the article
