Search Results

Search found 10101 results on 405 pages for 'temporary tables'.

Page 11/405

  • PHP SQL, SELECT corresponding data from 3 tables at once?

    - by user346325
    I have 3 tables: 'u', 'd', and 's'. 'u' has userid, divid. 'd' has divid, divname. 's' has sname, primaryuserid, secondaryuserid. What I'd like to do is display a table with rows of the following format: userid, divname, sname. Plus figure out a way to decipher whether userid is a primary or secondary for this sname table. I'm able to show userid and divname using a left join, but I don't know how to add a third table. To make it trickier, there can be more than one sname for each userid, up to ~20. Is there a way to display 0-20 snames depending on the userid, separated with commas?
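
    A minimal sketch of one way to do this, assuming MySQL and the table/column names described above (the snames alias and the '(primary)'/'(secondary)' tags are made up for illustration):

        SELECT u.userid,
               d.divname,
               GROUP_CONCAT(
                   CONCAT(s.sname,
                          IF(s.primaryuserid = u.userid, ' (primary)', ' (secondary)'))
                   SEPARATOR ', ') AS snames
        FROM u
        LEFT JOIN d ON d.divid = u.divid
        LEFT JOIN s ON u.userid IN (s.primaryuserid, s.secondaryuserid)
        GROUP BY u.userid, d.divname;

    Joining s on either key and tagging each sname inside GROUP_CONCAT keeps the 0-20 snames on one comma-separated row per user; a user with no snames still shows up, with a NULL snames column, thanks to the LEFT JOIN.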

    Read the article

  • Problems with Vista loading a temporary user profile.

    - by Joe
    I'm having a problem in Vista. My machine has four users, one for each of us in the house. Whenever another user logs in before me, logs out, and I then log in, Vista loads a temporary profile for me. However, if I restart and log in, I get into my profile with no problem. Two errors are written to the event log (see below), and I've searched everywhere for solutions.
    1: Windows was unable to load the registry. The problem is often caused by insufficient memory or insufficient security rights. DETAIL - The process cannot access the file because it is being used by another process. (for C:\users\joe\ntuser.dat) I've got plenty of disk space and memory.
    2: Windows cannot load the locally stored profile. Possible causes of this error include insufficient security rights or a corrupt local profile. DETAIL - The process cannot access the file because it is being used by another process.
    Thanks!

    Read the article

  • Windows 2012 RDS Temporary profile for Administrator

    - by Fabio
    I've configured a Windows 2012 RDS farm with two virtual servers (VMware, each one on a different ESX server). Both servers have the Licensing, Web Access, Gateway, Connection Broker and Session Host roles. High Availability is set up and it works fine. Remote Apps are working, and even Windows XP clients have access to the web interface. The user profile path is \\vmfiles1\UserProfileDisks\App\, and almost everyone has full access rights to it. The problem I have is that I would like to be able to access both servers at the same time with the Administrator account (console), but each time I try, the second server I log on to gives me access with a temporary profile. I tried to enable/disable multiple sessions per user and forced admin logoff with the GPO, but nothing changed. Another thing is that the server pool is not saved, so each time I restart the RDS server or log off from it, I have to add a server in Server Manager. Do you have any idea? Sorry if my English is not perfect.

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows 2008 R2 (all of them), and there are a couple of locations set up as Sites. Each location has DFS on its AD server. Roaming profiles are not used nor configured. Users have their home folder configured as an S: drive mapped to a DFS shared folder. For example, in the profile tab a user has: Home Folder - connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profile (on both XP and W7 desktops) and got temporary profiles. Also, it looks like the local profile was created today (judging from the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.

    Read the article

  • How to bulk insert data from ref cursor to a temporary table in PL/SQL

    - by Sambath
    Could anyone tell me how to bulk insert data from a ref cursor into a temporary table in PL/SQL? I have a procedure in which one of the parameters stores a result set; this result set will be inserted into a temporary table in another stored procedure. This is my sample code.

        CREATE OR REPLACE PROCEDURE get_account_list (
            type_id  IN  account_type.account_type_id%TYPE,
            acc_list OUT sys_refcursor
        ) IS
        BEGIN
            OPEN acc_list FOR
                SELECT account_id, account_name, balance
                FROM account
                WHERE account_type_id = type_id;
        END get_account_list;

        CREATE OR REPLACE PROCEDURE proc1 ( ... ) IS
            accounts sys_refcursor;
        BEGIN
            get_account_list(1, accounts);
            -- How to bulk insert the data in accounts into a temporary table?
        END proc1;

    In SQL Server, I can write it as in the code below:

        CREATE PROCEDURE get_account_list @type_id int AS
            SELECT account_id, account_name, balance
            FROM account
            WHERE account_type_id = @type_id;

        CREATE PROCEDURE proc1 ( ... ) AS
            ...
            INSERT INTO #tmp_data (account_id, account_name, balance)
            EXEC get_account_list 1;

    How can I write something similar to the SQL Server code? Thanks.
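
    A possible PL/SQL sketch (not from the original question): it assumes a global temporary table named tmp_account whose columns line up with the cursor's select list, and a made-up batch size of 1000:

        CREATE GLOBAL TEMPORARY TABLE tmp_account (
            account_id   NUMBER,
            account_name VARCHAR2(100),
            balance      NUMBER
        ) ON COMMIT DELETE ROWS;

        CREATE OR REPLACE PROCEDURE proc1 IS
            TYPE account_rec IS RECORD (
                account_id   tmp_account.account_id%TYPE,
                account_name tmp_account.account_name%TYPE,
                balance      tmp_account.balance%TYPE
            );
            TYPE account_tab IS TABLE OF account_rec;
            l_rows   account_tab;
            accounts sys_refcursor;
        BEGIN
            get_account_list(1, accounts);
            LOOP
                -- pull the next batch of rows from the ref cursor into the collection
                FETCH accounts BULK COLLECT INTO l_rows LIMIT 1000;
                EXIT WHEN l_rows.COUNT = 0;
                -- insert the whole batch with a single bulk statement
                FORALL i IN 1 .. l_rows.COUNT
                    INSERT INTO tmp_account VALUES l_rows(i);
            END LOOP;
            CLOSE accounts;
        END proc1;

    Fetching with BULK COLLECT ... LIMIT keeps memory bounded, and FORALL with a whole-record insert avoids referencing individual record fields, which older Oracle releases do not allow inside FORALL.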

    Read the article

  • ApplicationPoolIdentity permissions on Temporary Asp.Net files

    - by Anton
    Hi all, at work I am struggling a bit with the following situation: we have a web application that runs on a Windows Server 2008 64-bit machine. The app's application pool is running under the ApplicationPoolIdentity and is configured for .NET 2 and Classic pipeline mode. This works fine up to the moment that XML serialization requires the creation of serializer assemblies, where MEF is being used to create a collection of known types. To remedy this I was hoping that granting the ApplicationPoolIdentity rights to the ASP.NET Temporary Files directory would be enough, but alas... What I did was run the following command from a cmd prompt:

        icacls "c:\windows\microsoft.net\framework64\v2.0.50727\Temporary ASP.NET Files" /grant "IIS AppPool\MyAppPool":(M)

    Obviously this did not work, otherwise you would not be reading this :) The strange thing is that whenever I grant the Users group or, even more specifically, the Authenticated Users group those permissions, it works. What's weird as well (in my eyes) is that before I started granting access, the ApplicationPoolIdentity was already a member of IIS_IUSRS, which does have Modify rights on the Temporary ASP.NET Files directory. And now I'm left wondering why this situation requires Modify rights for the Authenticated Users group. I thought it could be because the app pool account was missing additional rights (googling for this returned some results, so I tried those), but granting the ApplicationPoolIdentity modification rights to the Windows\Temp directory and/or the application directory itself did not fix it. For now we have a workaround, but I hate that I don't know what exactly is going on here, so I was hoping any of you could shed some light on this. Thanks in advance!

    Read the article

  • Life Scope of Temporary Variable

    - by Yan Cheng CHEOK
        #include <cstdio>
        #include <string>

        void fun(const char* c) {
            printf("--> %s\n", c);
        }

        std::string get() {
            std::string str = "Hello World";
            return str;
        }

        int main() {
            const char *cc = get().c_str();
            // cc is not valid at this point, as it is pointing to the
            // temporary string's internal buffer, and the temporary string
            // has already been destroyed at this point.
            fun(cc);

            // But I am surprised this call yields a valid result.
            // It seems that the returned temporary string is valid within
            // scope (...). My understanding is that scope means {...}.
            // Is this valid behavior guaranteed by the C++ standard? Or does it
            // depend on your compiler vendor's implementation?
            fun(get().c_str());

            getchar();
        }

    The output is:

        -->
        --> Hello World

    Hello, may I know whether the correct behavior is guaranteed by the C++ standard, or does it depend on your compiler vendor's implementation? I have tested this under VC2008 and VC6; it works fine for both.

    Read the article

  • How to temporarily disable a mirror video driver in the Windows XP registry

    - by happy clicker
    Because it's a lot of text, I will first ask my question and then explain what the base problem is. Perhaps someone can give me a solution to the base problem: is there a way to temporarily disable mirror video drivers (through the registry or similar) without uninstalling the corresponding software? I tested changing the enumeration in LocalMachine\Hardware\DeviceMap\Video, but after a reboot the old configuration is always restored.

    Explanation of the base problem
    We are working on a WPF project for a department of a big company. There we have the problem that WPF renders only in software mode, although the hardware they have must support hardware rendering (Tier 2). After searching for a solution to the problem, we found out that Direct3D does not work properly, and we think that's why WPF can only use SW rendering. In dxdiag.exe, Direct3D acceleration is enabled, but if we start the test routine it always fails, saying that it does not have enough memory (it says memory, not video memory!). I have seen 3 different types of PCs there (they have some hundreds of each type) and every type shows exactly the same behavior. We tried to update all the drivers, also DirectX (version 9.0c), and we searched a lot on the web but could not find a solution. All the PCs have Intel Dual-Core processors or better; one type has an Intel GMA 9000 graphics card, the other two types have current ATI and NVIDIA graphics cards with 256MB onboard memory. Also, the system memory is at least 2GB. Windows is XP SP3. The PCs are from two different manufacturers. Because we see exactly the same behavior on every computer of these three very different computer types, we don't think that this is a driver or DirectX problem. What we've found in other newsgroups is that DirectX can be disturbed by mirror video drivers such as NetMeeting, VNC and other remote desktop installations. In the registry, we see a lot of such mirror entries under LocalMachine\Hardware\DeviceMap\Video, and we also find the definitions in the CurrentControlSet\Control\Video section (however, these drivers are not shown in the hardware panel of the OS). We can get admin rights on one of these computers to test whether disabling these drivers would help, but we must not change the configuration in a way that leaves some software broken after the tests. Therefore I cannot uninstall any software, because I do not have the media, licenses or know-how to reinstall those apps. The company's support, however, will only start working on it if I can tell them what the real problem is. That's why we are looking for a way to disable these mirror drivers (or a hint to solve the DirectX problem, if we are on the wrong track).

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code
    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed
    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Simpler Code
    Now that the classes representing an Azure Table entity are defined, let's review what the Azure SDK methods look like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies
    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results
    With Sequential Fetch Methods
    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

    With Fetch Strategies
    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), and an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test hit a limit on my network bandwidth quickly (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods
    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Is there a way to know what the Windows Disk Cleanup utility will delete?

    - by Cam Jackson
    When I run the Disk Cleanup utility that's built into Windows 8, it tells me that it can free up 53GB by deleting 'Temporary Files'. However, a CCleaner analysis on default settings only finds about 300MB worth of space to free up, so I'm wondering what Disk Cleanup has found that CCleaner does not. Note that this question appears to be similar to what I'm asking, but the accepted answer says that 'Temporary Files' refers to %TEMP%. I've already cleared out most of C:\Users\Cam\AppData\Local\Temp, and it now has only 230MB of stuff in it, even with system files showing. So where is this 53GB located? Is there a way to find out what it is?
    Edit: I should note that this is on a 110GB SSD, so it's almost half the drive. And in fact I'm only using 86GB, so if it's really going to clear out 53GB, that would be more than 60% of the stuff on my C drive. I'm starting to think that Disk Cleanup caches its analysis, and hasn't updated since I started cleaning up the drive earlier today. Although when I run it it says that it's 'Calculating' how much space can be saved, and it takes about 5-10 seconds to do so. Hmmm...
    Edit2: Here is what my hard drive looks like, according to SpaceMonger (Right click-Open image in new tab, so you can see it properly): You can see why I was starting to think that the 53GB figure is actually wrong. Even if 'Temporary Files' includes my hiberfil and everything in WinSxS (about 13GB total), that would be 26GB, which is only halfway there. Hard to see where there's 53GB of stuff to delete.

    Read the article

  • Writing temporary data from R

    - by Shane
    I want to write some temporary data to disk in an R package, and I want to be sure that it can run on every OS without assuming the user has admin rights. Is there an existing R function that can provide a path to a temporary directory on all major OS's? Or a way to reference a user's home directory? Otherwise, I was thinking of trying this: Sys.getenv("temp") I presume that I can't expect people to have write access to their R locations, otherwise I could reference a path within the package directory: .find.package("package.name").

    Read the article

  • Creation time of InnoDB tables

    - by shantanuo
    The CREATE_TIME column of the TABLES table in INFORMATION_SCHEMA shows the same CREATE_TIME for all my InnoDB tables. That would mean all these tables were created between 2010-03-26 06:52:00 and 2010-03-26 06:53:00, while actually they were created a few months ago. Does the CREATE_TIME field change automatically for InnoDB tables?

    Read the article

  • Preference values - static without tables using a model with virtual attributes

    - by Mike
    I'm trying to eliminate two tables from my database. The tables are message_sort_options and per_page_options. These tables basically just have 5 records, which are options a user can set as their preference in a preferences table. The preferences table has columns like sort_preferences and per_page_preference, which both point to a record in the other two tables containing the options. How can I set up the models with virtual attributes and fixed values for the options, eliminating table lookups every time the preferences are looked up?

    Read the article

  • How to update tables' structures while keeping current data

    - by Leon
    I have a C# application that uses tables from a SQL Server 2008 database (it runs on a standalone PC with a local SQL Server). Initially I install the database on this PC with some initial data (there are some tables that the application uses and the user doesn't touch). The question is: how can I upgrade this database after the user has created some new data, without harming that data? (I continue developing and may add some new tables or stored procedures, or add some columns to existing tables.) Thanks in advance!
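
    One common approach, sketched here with purely hypothetical object names, is to ship an idempotent upgrade script that only adds what is missing, so the user's data is left untouched:

        -- add a column only if it is not there yet
        IF COL_LENGTH('dbo.Customers', 'Email') IS NULL
            ALTER TABLE dbo.Customers ADD Email NVARCHAR(256) NULL;

        -- create a new table only if it is not there yet
        IF OBJECT_ID('dbo.AuditLog', 'U') IS NULL
            CREATE TABLE dbo.AuditLog (
                AuditId   INT IDENTITY(1,1) PRIMARY KEY,
                EventTime DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
                Detail    NVARCHAR(MAX) NULL
            );

    Keeping a one-row schema-version table and running only the scripts that have not been applied yet makes the upgrade repeatable; migration frameworks automate the same idea.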

    Read the article

  • C++0x rvalue references and temporaries

    - by Doug
    (I asked a variation of this question on comp.std.c++ but didn't get an answer.) Why does the call to f(arg) in this code call the const ref overload of f?

        void f(const std::string &);  // less efficient
        void f(std::string &&);       // more efficient

        void g(const char * arg) {
            f(arg);
        }

    My intuition says that the f(string &&) overload should be chosen, because arg needs to be converted to a temporary no matter what, and the temporary matches the rvalue reference better than the lvalue reference. This is not what happens in GCC and MSVC. In at least G++ and MSVC, any lvalue does not bind to an rvalue reference argument, even if there is an intermediate temporary created. Indeed, if the const ref overload isn't present, the compilers diagnose an error. However, writing f(arg + 0) or f(std::string(arg)) does choose the rvalue reference overload as you would expect. From my reading of the C++0x standard, it seems like the implicit conversion of a const char * to a string should be considered when deciding whether f(string &&) is viable, just as when passing a const lvalue ref argument. Section 13.3 (overload resolution) doesn't differentiate between rvalue refs and const references in too many places. Also, it seems that the rule that prevents lvalues from binding to rvalue references (13.3.3.1.4/3) shouldn't apply if there's an intermediate temporary - after all, it's perfectly safe to move from the temporary. Is this:
    - me misreading/misunderstanding the standard, where the implemented behavior is the intended behavior, and there's some good reason why my example should behave the way it does?
    - a mistake that the compiler vendors have somehow all made? Or a mistake based on common implementation strategies? Or a mistake in e.g. GCC (where this lvalue/rvalue reference binding rule was first implemented) that was copied by other vendors?
    - a defect in the standard, or an unintended consequence, or something that should be clarified?

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id = 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id = 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id = 265041
        IRC  orders             26     0         1         order_id = 694472
        IRC  orders_product     6      0         1         op_id = 935375

    Read the article

  • How to use DML on Oracle temporary table without generating much undo log

    - by Sambath
    Using an Oracle temporary table does not generate as much redo log as a normal table. However, undo is still generated. So, how can I write insert, update, or delete statements against a temporary table so that Oracle will not generate undo, or will generate as little as possible? Moreover, using /*+ APPEND */ in the insert statement will generate little undo. Am I correct? If not, could anyone explain the use of the /*+ APPEND */ hint?

        INSERT /*+ APPEND */ INTO table1(...) VALUES(...);

    Thank you.
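
    For reference, a small sketch of the direct-path variant (the table and column names are made up). The APPEND hint only takes effect for INSERT ... SELECT, it reduces undo for the inserted data blocks but not for index maintenance, and the session must commit before it can query the table again:

        CREATE GLOBAL TEMPORARY TABLE gtt_stage (
            id   NUMBER,
            name VARCHAR2(128)
        ) ON COMMIT PRESERVE ROWS;

        INSERT /*+ APPEND */ INTO gtt_stage (id, name)
        SELECT object_id, object_name
        FROM all_objects
        WHERE ROWNUM <= 1000;

        COMMIT;  -- required after a direct-path insert before the same session selects from the table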

    Read the article

  • Open zip file without temporary files

    - by Javis Perez
    I've seen this post about extracting a zip without a temporary file via streams and pipes: Open a file from archive without temporary extraction. The problem is that I'm using PHP and have no idea if that is possible. I've searched a lot with no luck. My idea is to preview zip files from Dropbox using its API, but I don't want to save the files to a local drive, just preview the content. Any idea if that's possible with PHP? Almost everything I found is about creating the file, not reading it... :-\ I was thinking that I might try Node.js, but I know mostly nothing about Node.js; do you think it would support it? Any other ideas, please? Thank you.

    Read the article

  • Using Python, how to copy files in the 'Temporary Internet Files' folder in Windows

    - by pythBegin
    I am using this code to find files recursively in a folder, with size greater than 50000 bytes.

        import os

        def listall(parent):
            lis = []
            for root, dirs, files in os.walk(parent):
                for name in files:
                    if os.path.getsize(os.path.join(root, name)) > 500000:
                        lis.append(os.path.join(root, name))
            return lis

    This is working fine. But when I use this on the 'Temporary Internet Files' folder in Windows, I get this error.

        Traceback (most recent call last):
          File "<pyshell#4>", line 1, in <module>
            listall(a)
          File "<pyshell#2>", line 5, in listall
            if os.path.getsize(os.path.join(root,name))>500000:
          File "C:\Python26\lib\genericpath.py", line 49, in getsize
            return os.stat(filename).st_size
        WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\Documents and Settings\\khedarnatha\\Local Settings\\Temporary Internet Files\\Content.IE5\\EDS8C2V7\\??????+1[1].jpg'

    I think this is because Windows gives names with special characters in this specific folder... Please help me sort out this issue.

    Read the article

  • Best way to create tables with Doctrine?

    - by ajsie
    Assume that I start coding an application from scratch. When using Doctrine, is the best way to create tables to manually create them in MySQL and then generate models from the tables, or is it the other way around, that is, to create the models in PHP and then generate the tables from the models? And if I already have a database, will the generated models be optimal? I have heard some say that it's best to create the database from scratch when using an ORM, so that the relations are optimized for OOD. Share your thoughts!

    Read the article

  • How To Delete Top 100 Rows From SQL Server Tables

    - by Gopinath
    If you want to delete the top 100/n records from an SQL Server table, it is very easy with the following query:

        DELETE FROM MyTable
        WHERE PK_Column IN (
            SELECT TOP 100 PK_Column
            FROM MyTable
            ORDER BY creation
        )

    Why Would You Require To Delete Top 100 Records?
    I often delete the top n records of a table when the number of rows in it is too huge. Let's say I have 1,000,000,000 records in a table; deleting 10,000 rows at a time in a loop is faster than trying to delete all 1,000,000,000 at once. Whatever the reason may be, if you ever come across a requirement to delete a bunch of rows at a time, this query will be helpful to you.
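
    As a sketch of the loop idea above (the cutoff in the WHERE clause is hypothetical; drop it to simply empty the table in chunks):

        WHILE 1 = 1
        BEGIN
            DELETE TOP (10000) FROM MyTable
            WHERE creation < '20100101';   -- hypothetical cutoff date

            IF @@ROWCOUNT = 0 BREAK;       -- nothing left to delete
        END

    Each iteration is its own small transaction, which keeps the transaction log from growing the way one huge DELETE would.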

    Read the article
