Search Results

Search found 1999 results on 80 pages for 'temporary'.


  • Limit the number of rows returned on the server side (forced limit)

    - by evolve
    So we have a piece of software with a poorly written SQL statement that causes every row in a table to be returned. There are several million rows in the table, so this is causing serious memory issues and crashes on our clients' machines. The vendor is in the process of creating a patch for the issue, but it is still a few weeks out. In the meantime we are trying to figure out a way to limit the number of results returned on the server side, just as a temporary fix. I have no real hope of there being a solution; I've looked around and don't really see any way of doing this, but I'm hoping someone might have an idea. Thank you in advance. EDIT: I forgot an important piece of information: we have no access to the source code, so we cannot change this on the client side where the SQL statement is formed. There is no real server-side component; the client accesses the database directly. Any solution would basically require a procedure, a trigger, or some sort of SQL Server 2008 setting/command.
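    One possible server-side stopgap, sketched below: rename the table and put a TOP-limited view in its place, so the client's unchanged SELECT hits the view. The table name and row cap are hypothetical, and this only helps if the client reads the table by name (INSERTs and UPDATEs through the view would need testing):

        -- Hypothetical names: dbo.BigTable is the table the client queries.
        EXEC sp_rename 'dbo.BigTable', 'BigTable_base';
        GO
        -- A view with the original name caps what any unqualified SELECT returns.
        CREATE VIEW dbo.BigTable
        AS
            SELECT TOP (10000) *
            FROM dbo.BigTable_base;
        GO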

  • Compressing a web service response for jQuery

    - by SirDemon
    I'm attempting to gzip a JSON response from an ASMX web service to be consumed on the client side by jQuery. My web.config already has httpCompression set like so (I'm using IIS 7):

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                         staticCompressionDisableCpuUsage="90" staticCompressionEnableCpuUsage="60"
                         dynamicCompressionDisableCpuUsage="80" dynamicCompressionEnableCpuUsage="50">
          <dynamicTypes>
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="text/css" enabled="true" />
            <add mimeType="video/x-flv" enabled="true" />
            <add mimeType="application/x-shockwave-flash" enabled="true" />
            <add mimeType="text/javascript" enabled="true" />
            <add mimeType="text/*" enabled="true" />
            <add mimeType="application/json; charset=utf-8" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="text/css" enabled="true" />
            <add mimeType="video/x-flv" enabled="true" />
            <add mimeType="application/x-shockwave-flash" enabled="true" />
            <add mimeType="text/javascript" enabled="true" />
            <add mimeType="text/*" enabled="true" />
          </staticTypes>
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
        </httpCompression>
        <urlCompression doDynamicCompression="true" doStaticCompression="true" />

    Through Fiddler I can see that normal .aspx pages and other responses compress fine. The jQuery AJAX request and response work as they should, only nothing gets compressed. What am I missing?
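    One thing that may be worth checking (an assumption about this setup, not stated in the post): the <httpCompression> section is normally locked to applicationHost.config (allowDefinition='AppHostOnly'), so a copy of it in a site's web.config can be silently ignored. A sketch of adding the JSON entry at the server level with appcmd instead:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost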

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me. For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly:

        Mutex m;
        tData * stage;   // temporary, accessed concurrently

        // send data, gives up ownership, receives old stage if any
        tData * Send(tData * newData) {
            ScopedLock lock(m);
            swap(newData, stage);
            return newData;
        }

        // receiving thread fetches latest data here
        tData * Fetch(tData * prev) {
            ScopedLock lock(m);
            if (stage != 0) {
                // ... release prev
                prev = stage;
                stage = 0;
            }
            return prev; // now current
        }

    Note: this is not supposed to be a full producer-consumer queue; only the most recent data is relevant. Also, I've skimped on resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results. Now, my questions, assuming that ScopedLock implements a full memory barrier: do stage and/or workerData need to be volatile? Is volatile necessary for tData members? Can I use smart pointers instead of the raw pointers - say boost::shared_ptr? Anything else that can go wrong? I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks, but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here.)
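    On the smart-pointer question, a minimal sketch of the same mailbox using shared_ptr (shown with C++11 std types as an assumption; boost::shared_ptr behaves the same way here). Given a lock that really is a full barrier, the general guidance is that volatile adds nothing - the lock already orders the accesses:

        #include <memory>
        #include <mutex>

        struct tData { /* ... payload ... */ };

        std::mutex m;
        std::shared_ptr<tData> stage;   // latest-value mailbox, guarded by m

        // worker publishes; the previously staged value is released automatically
        void Send(std::shared_ptr<tData> newData) {
            std::lock_guard<std::mutex> lock(m);
            stage.swap(newData);
        }   // old staged data (if any) is destroyed here, after the lock is gone

        // controller fetches; keeps prev if nothing new was staged
        std::shared_ptr<tData> Fetch(std::shared_ptr<tData> prev) {
            std::lock_guard<std::mutex> lock(m);
            if (stage)
                prev = std::move(stage);   // take latest, leave stage empty
            return prev;
        }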

  • Use of for_each on map elements

    - by Antonio
    I have a map where I'd like to call a member function on every mapped data object. I know how to do this on any sequence, but is it possible to do it on an associative container? The closest answer I could find was this: Boost.Bind to access std::map elements in std::for_each. But I cannot use Boost in my project, so: is there an STL alternative to boost::bind that I'm missing? If it is not possible, I thought of creating a temporary sequence of pointers to the data objects and then calling for_each on it, something like this:

        class MyClass
        {
        public:
            void Method() const;
        };

        std::map<int, MyClass> Map;
        //...
        std::vector<MyClass*> Vector;
        std::transform(Map.begin(), Map.end(), std::back_inserter(Vector),
                       std::mem_fun_ref(&std::map<int, MyClass>::value_type::second));
        std::for_each(Vector.begin(), Vector.end(), std::mem_fun(&MyClass::Method));

    It looks too obfuscated and I don't really like it. Any suggestions?
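    A Boost-free sketch under C++03 (which the question appears to target): a small hand-written functor passed to for_each works directly on the map's value_type, with no temporary vector:

        #include <algorithm>
        #include <map>

        class MyClass
        {
        public:
            void Method() const {}   // stub body so the sketch compiles
        };

        // functor that forwards each map entry to the member function
        struct CallMethod
        {
            void operator()(const std::pair<const int, MyClass>& p) const
            {
                p.second.Method();
            }
        };

        std::map<int, MyClass> Map;
        // ...
        std::for_each(Map.begin(), Map.end(), CallMethod());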

  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a Json result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar responses are compressed. Viewing the request, the Accept-Encoding: gzip header is present. The response has the mimetype "application/json", but is not compressed. I've identified that the issue appears to relate to the MimeType. When I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume that this issue has something to do with the way that ASP.NET MVC generates content type headers. The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching mime type.

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" noCompressionForProxies="false">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/json" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/json" enabled="true" />
          </staticTypes>
        </httpCompression>
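    MVC's JsonResult actually emits Content-Type: application/json; charset=utf-8, and the "no matching mime type" trace hint suggests the comparison here is against that full header value. A sketch of the commonly suggested extra entry (an assumption to verify against the failed-request trace):

        <dynamicTypes>
          <!-- exact match for what JsonResult sends, parameters included -->
          <add mimeType="application/json; charset=utf-8" enabled="true" />
        </dynamicTypes>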

  • hg archive to Remote Directory

    - by Brett Daniel
    Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:

        hg archive ssh://[email protected]/path/to/archive

    However, that does not appear to work. It instead creates a directory called ssh: in the current directory. I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.

        if [[ $# != 1 ]]; then
            echo "Usage: $0 [user@]hostname:remote_dir"
            exit
        fi

        arg=$1
        arg=${arg%/}            # remove trailing slash
        host=${arg%%:*}
        remote_dir=${arg##*:}

        # zip named to match lowest directory in $remote_dir
        zip=${remote_dir##*/}.zip

        # root of archive will match zip name
        hg archive -t zip $zip

        # make $remote_dir if it doesn't exist
        ssh $host mkdir --parents $remote_dir

        # copy zip over ssh into destination
        scp $zip $host:$remote_dir

        # unzip into containing directory (will prompt for overwrite)
        ssh $host unzip $remote_dir/$zip -d $remote_dir/..

        # clean up zips
        ssh $host rm $remote_dir/$zip
        rm $zip

    Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
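    One possible improvement, sketched under the assumption that the local hg accepts "-" (stdout) as the archive destination: stream a tar through ssh and skip the staging zip entirely. The host and path are placeholders, and --strip-components is GNU tar:

        # stream the archive; --strip-components drops the top-level
        # directory hg puts inside the archive
        hg archive -t tgz - | ssh user@host \
            "mkdir -p /path/to/archive && tar xzf - -C /path/to/archive --strip-components=1"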

  • HttpHandler and XML files

    - by Frank
    Hello, I would like to intercept any request made to the server for XML files. I thought that it might be possible with an HttpHandler. It's coded and it works... on localhost only (?!?!). So, why is it working on localhost only? Here is my web.config:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.web>
            <httpHandlers>
              <add verb="*" path="*.xml" type="FooBar.XmlHandler, FooBar" />
            </httpHandlers>
          </system.web>
        </configuration>

    Here is my C#:

        namespace FooBar
        {
            public class XmlHandler : IHttpHandler
            {
                public bool IsReusable
                {
                    get { return false; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    HttpResponse Response = context.Response;
                    Response.Write(xmlString);
                }
            }
        }

    As you might have seen, I'm writing xmlString directly into the response; that's only temporary, because I'm still wondering how I could serve the file itself instead (that's the second question ;) ). What is supposed to be written in the response is only the XML file name that will be retrieved by a Flash app. Thanks. Edit: when calling the page from another computer, it looks like the request never reaches the HttpHandler. However, the mapping for IIS has been done correctly.
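    For the second question, a sketch of streaming the requested file instead of a hard-coded string (assuming the .xml files live on disk under the application root):

        public void ProcessRequest(HttpContext context)
        {
            // map the requested ~/foo.xml to its physical path and stream it
            string path = context.Server.MapPath(
                context.Request.AppRelativeCurrentExecutionFilePath);
            context.Response.ContentType = "text/xml";
            context.Response.TransmitFile(path);
        }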

  • Get "term is undefined” error when trying to assign arrayList to List component dataSource

    - by user1814467
    I'm creating an online game where people log in and then have the list of current players displayed. When the user enters a "room", it dispatches an SFSEvent which includes a Room object with the list of users as User objects in that room. In that event's callback function, I get the list of current users (which is an Array), switch the View Stack child index, and then wrap the user list array in an ArrayList before I assign it to the MXML Spark List component's dataProvider. Here's my ActionScript code (PreGame.as):

        private function onRoomJoin(event:SFSEvent):void
        {
            const room:Room = this._sfs.getRoomByName(PREGAME_ROOM);
            this.selectedChild = waitingRoom;

            /** I know I should be using event listeners
             *  but this is a temporary fix, otherwise
             *  I keep getting null object errors
             *  due to the li_users list not being
             *  created in time for the dataProvider assignment
             **/
            setTimeout(function ():void {
                const userList:ArrayList = new ArrayList(room.userList);
                this.li_users.dataProvider = userList;  // This is where the error gets thrown
            }, 1000);
        }

    My MXML code:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:ViewStack xmlns:fx="http://ns.adobe.com/mxml/2009"
                      xmlns:s="library://ns.adobe.com/flex/spark"
                      xmlns:mx="library://ns.adobe.com/flex/mx"
                      initialize="preGame_initializeHandler(event)">

            <fx:Script source="PreGame.as"/>

            <s:NavigatorContent id="nc_loginScreen">
                /** Login Screen Code **/
            </s:NavigatorContent>

            /** Start of Waiting Room code **/
            <s:NavigatorContent id="waitingRoom">
                <s:Panel id="pn_users" width="400" height="400" title="Users">
                    /** This is the List in question **/
                    <s:List id="li_users" width="100%" height="100%"/>
                </s:Panel>
            </s:NavigatorContent>
        </mx:ViewStack>

    However, I keep getting this error: TypeError: Error #1010: A term is undefined and has no properties. Any ideas what I'm doing wrong? The ArrayList has data, so I know it's not empty/null.
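    A sketch of one likely cause (an inference, not stated in the post): inside an anonymous setTimeout closure, "this" no longer refers to the component, so this.li_users is undefined - a classic source of error #1010. Capturing the List in a local variable before the closure sidesteps it:

        private function onRoomJoin(event:SFSEvent):void
        {
            const room:Room = this._sfs.getRoomByName(PREGAME_ROOM);
            this.selectedChild = waitingRoom;

            const users:List = this.li_users;  // capture while "this" is still the component
            setTimeout(function ():void {
                users.dataProvider = new ArrayList(room.userList);
            }, 1000);
        }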

  • Slow MySQL Query Breaking my back!

    - by Chris n
    So, I have tried everything I can think of, and can't get this query to run in less than 3 seconds on my local server. I know the problem has to do with the OR referencing both owner_id and person_id. If I run one or the other it happens instantly, but together with an OR I can't seem to make it work - I looked into rewriting the code, but the way the app was designed it won't be easy. Is there a way I can write an equivalent OR that won't take so long? Here is the SQL:

        SELECT event_types.name AS event_type_name,
               event_types.id AS id,
               count(events.id) AS count,
               sum(events.estimated_duration) AS time_sum
        FROM events, event_types
        WHERE event_types.id = events.event_type_id
          AND events.event_type_id != '4'
          AND (events.status != 'cancelled')
          AND events.event_type_id != 64
          AND (events.owner_id = 161 OR events.person_id = 161)
        GROUP BY event_types.name
        ORDER BY event_types.name DESC;

    Here's the EXPLAIN output, although I'm guessing it's unnecessary because there is probably an obviously better way to structure that OR:

        id: 1  select_type: SIMPLE  table: event_types  type: range
            possible_keys: PRIMARY  key: PRIMARY  key_len: 4  ref: NULL  rows: 78
            Extra: Using where; Using temporary; Using filesort

        id: 1  select_type: SIMPLE  table: events  type: ref
            possible_keys: index_events_on_status, index_events_on_event_type_id,
                           index_events_on_person_id, index_events_on_owner_id
            key: index_events_on_event_type_id  key_len: 5
            ref: thenumber_production.event_types.id  rows: 907
            Extra: Using where

    Thanks so much! chris.
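    One standard rewrite worth trying, sketched with the post's own column names: split the OR into a UNION so each branch can use its own single-column index (index_events_on_owner_id and index_events_on_person_id respectively):

        SELECT et.name AS event_type_name, et.id AS id,
               COUNT(e.id) AS count, SUM(e.estimated_duration) AS time_sum
        FROM event_types et
        JOIN (
            SELECT id, event_type_id, estimated_duration FROM events
            WHERE owner_id = 161 AND status != 'cancelled'
              AND event_type_id NOT IN (4, 64)
            UNION
            SELECT id, event_type_id, estimated_duration FROM events
            WHERE person_id = 161 AND status != 'cancelled'
              AND event_type_id NOT IN (4, 64)
        ) e ON et.id = e.event_type_id
        GROUP BY et.name
        ORDER BY et.name DESC;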

  • Force creation of query execution plan

    - by Marc
    I have the following situation: a .NET 3.5 WinForms client app accessing SQL Server 2008. Some queries returning relatively large amounts of data are used quite often by a form. Some users are using local SQL Express and restarting their machines at least daily; other users are working remotely over slow network connections. The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15s on a fast machine to execute. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and it must be loaded from disk first. My question: would it be possible to force the loading of the required data in advance into the SQL Server cache? Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast directly. However, I don't want to pull the results of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
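    A sketch of one way to warm the buffer cache without returning rows (all object names are hypothetical; assigning the aggregate to a variable keeps the result server-side):

        -- called from the background worker right after the app starts
        CREATE PROCEDURE dbo.WarmFormCache
        AS
        BEGIN
            DECLARE @touch BIGINT;
            -- COUNT_BIG(*) reads every row, pulling the pages into the buffer
            -- pool without sending anything to the client; a WITH (INDEX(...))
            -- hint may be needed when a query depends on a specific index.
            SELECT @touch = COUNT_BIG(*) FROM dbo.Orders;      -- hypothetical table
            SELECT @touch = COUNT_BIG(*) FROM dbo.OrderLines;  -- hypothetical table
        END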

  • JBoss Developer Studio exploded SAR/EAR/WAR deployment

    - by dr_hoppa
    I am new to JBoss Developer Studio, and what I could not figure out is how to make a folder from my project deployable - an exploded SAR in my case. If I create a new server in the JBoss server view / JBoss AS perspective, select a SAR, and make it deployable, then the SAR file can be deployed on the JBoss server. Another approach that I have tried is to mark the *-service.xml file as a deployable element (same procedure as with the SAR file) and to add the Eclipse project as a dependency to the classpath of the server ('Open launch configuration' / 'Classpath'); after doing this I got an error from JBoss telling me that the JBoss classes are not found. The compromise solution that I have found is to create a junction/symlink from the output of the Eclipse project 'bin' into the JBoss 'server//deploy/xxx.sar'; this is a temporary solution but will not scale in the future as multiple services must be created. What I would need: the ability to mark the exploded folder as deployable, to have the classes/*-service.xml files loaded from there, and to have hot deployability for them. Any suggestions would help.

  • SQL Server Bulk insert of CSV file with inconsistent quotes

    - by mattstuehler
    Is it possible to BULK INSERT (SQL Server) a CSV file in which the fields are only OCCASIONALLY surrounded by quotes? Specifically, quotes only surround those fields that contain a ",". In other words, I have data that looks like this (the first row contains headers):

        id, company, rep, employees
        729216,INGRAM MICRO INC.,"Stuart, Becky",523
        729235,"GREAT PLAINS ENERGY, INC.","Nelson, Beena",114
        721177,GEORGE WESTON BAKERIES INC,"Hogan, Meg",253

    Because the quotes aren't consistent, I can't use '","' as a delimiter, and I don't know how to create a format file that accounts for this. I tried using ',' as a delimiter and loading it into a temporary table where every column is a varchar, then using some kludgy processing to strip out the quotes, but that doesn't work either, because the fields that contain ',' are split across multiple columns. Unfortunately, I don't have the ability to manipulate the CSV file beforehand. Is this hopeless? Many thanks in advance for any advice. By the way, I saw this post: SQL bulk import from csv, but in that case, EVERY field was consistently wrapped in quotes. So, in that case, he could use ',' as a delimiter, then strip out the quotes afterwards.
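    One possible fallback, sketched with hypothetical names: bulk-load each line whole into a one-column staging table (the default field terminator is tab, which never appears in this data), then split the lines in T-SQL, where the quoting rules can be handled:

        CREATE TABLE #raw (line VARCHAR(8000));

        BULK INSERT #raw
        FROM 'C:\data\companies.csv'        -- hypothetical path
        WITH (ROWTERMINATOR = '\n', FIRSTROW = 2);

        -- parse #raw.line here, honoring quotes around embedded commas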

  • GridView not DataBinding Automatically after ObjectDataSource Select Method

    - by John Polvora
    I've created an ObjectDataSource that returns a DataTable, and the GridView is bound to this data source. The ODS has parameters assigned in the Page_Load event, the ODS returns the data OK, and the GridView shows it fine. The problem involves a TextBox with a filter. First I created a FilterExpression on the GridView using the contents of the TextBox, which worked fine for me. But now I've enabled paging in the GridView, so the FilterExpression is no longer useful, since the ODS returns only the rows for the current page of the GridView. I wrote a new ODS select method that takes page and pagesize parameters matching the GridView, and it's OK. Now my filter TextBox passes its Text property to a parameter of the ODS select method, and the ODS gets the data based on my filter and shows it in the grid. In Page_Load:

        ObjectDataSource_Lista.SelectParameters["search"].DefaultValue = filter;
        ObjectDataSource_Lista.SelectParameters["id"].DefaultValue = ID.ToString();

    But when I change the value of the filter, the grid doesn't refresh. On debugging, I see that the ODS select method is called with the new values, but the GridView is not, so I need to call the grid's DataBind() method manually to refresh the data. The problem is, I have a command button in the grid, and if I manually DataBind(), the command button stops functioning, generating page ValidateRequest errors. My question is: how do I get the grid to databind automatically after the data source is refreshed? PS: doing it in the ODS Selected event causes an infinite loop and the debug web server crashes. Temporary solution: I created a variable, private bool wasdatabound; in the GridView_DataBound event I set wasdatabound = true, and in Page_PreRenderComplete:

        if ((GridView1.Visible) && (!wasdatabound))
            GridView1.DataBind();
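    A sketch of an alternative (an assumption about this setup, not from the post): supply the parameter values in the ODS Selecting event instead of Page_Load, so every select - including the one the GridView itself triggers on postback - sees the current filter, and no manual DataBind() is needed. The handler must be wired to the OnSelecting event in markup or code:

        protected void ObjectDataSource_Lista_Selecting(object sender,
            ObjectDataSourceSelectingEventArgs e)
        {
            e.InputParameters["search"] = filter;   // "filter" as in the post
            e.InputParameters["id"] = ID.ToString();
        }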

  • Modifying a const through a non-const pointer

    - by jasonline
    I'm a bit confused about what happened in the following code:

        const int e = 2;
        int* w = ( int* ) &e;           // (1) cast to remove const-ness
        *w = 5;                         // (2)
        cout << *w << endl;             // (3) outputs 5
        cout << e << endl;              // (4) outputs 2
        cout << "w = " << w << endl;    // (5) w points to the address of e
        cout << "&e = " << &e << endl;

    In (1), w points to the address of e. In (2), that value was changed to 5. However, when the values of *w and e were displayed, their values are different. But if you print the value of the w pointer and &e, they have the same value/address. How come e still contains 2, even though it was changed to 5? Were they stored in separate locations? Or a temporary? But then how come the value pointed to by w is still the address of e?
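    A sketch of what is likely going on (general C++ behavior, not from the post): writing through the cast is undefined behavior, and because e is a compile-time constant the compiler folds reads of e to the literal 2, while *w really does load from the (modified) storage:

        // what the compiler effectively emits for lines (3) and (4):
        cout << *w << endl;   // real load through the pointer -> prints 5
        cout << 2  << endl;   // "e" was folded to its known constant value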

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I'm in a problem. I have two TCP apps connected to each other which use Winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];

            // filling cBuff with some data

            WSASend(...);   // sending cBuff, non-blocking mode

            // filling cBuff with other data

            WSASend(...);   // again, sending cBuff

            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers? How should I handle them? How can I avoid a performance penalty, etc.? And, if I am to use such buffers, that means I should copy the data to be sent from the source buffer to a temporary one, so I'd set SO_SNDBUF on each socket to zero so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
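    A sketch of one common strategy (AcquireFromPool/ReturnToPool are hypothetical free-list helpers): keep the OVERLAPPED, the WSABUF, and the payload in a single pooled allocation, so the buffer outlives the call and the completion maps straight back to its context:

        struct SendContext {
            WSAOVERLAPPED ov;      // first member: the completion's OVERLAPPED*
            WSABUF        wsaBuf;  //   can be cast straight back to SendContext*
            char          data[1024];
        };

        SendContext* ctx = AcquireFromPool();           // hypothetical pool pop
        ZeroMemory(&ctx->ov, sizeof(ctx->ov));          // OVERLAPPED must be zeroed
        memcpy(ctx->data, payload, payloadLen);
        ctx->wsaBuf.buf = ctx->data;
        ctx->wsaBuf.len = payloadLen;
        WSASend(sock, &ctx->wsaBuf, 1, NULL, 0, &ctx->ov, NULL);
        // ...later, when IOCP reports the write complete for &ctx->ov:
        //     ReturnToPool(reinterpret_cast<SendContext*>(completedOverlapped));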

  • shell scripting: search/replace & check file exist

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";  # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN -o`;
        isdone=`ls *.txt | grep $PATTERN -o`;
        whatsleft=todo - isdone;  # what's the unix magic?
        # tack on the .xml suffix with sed or something
        # and then call the job server:
        jobserve E "$whatsleft";

    and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep? To clarify: the simplest way to do what I'm asking is (in pseudocode):

        for i in `/bin/ls *.xml`
        do
            replace xml suffix with txt
            if [that file exists]
                add to whatsleft list
            end
        done
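    A sketch of the set difference with comm (bash process substitution; comm -23 keeps lines that appear only in the first sorted input):

        PATTERN='[A-Z]*0[1-2][a-j]'
        whatsleft=$(comm -23 <(ls *.xml | grep -o "$PATTERN" | sort) \
                             <(ls *.txt | grep -o "$PATTERN" | sort))
        for stem in $whatsleft; do
            jobserve E "${stem}.xml"   # jobserve/E as in the post
        done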

  • "RewriteBase: argument is not a valid URL" error

    - by user305434
    Hi, I'm trying to configure the .htaccess of my website. http://213.175.210.49/~incisozl/ is the temporary URL to the root (~/public_html/). When I try to rewrite the URL in .htaccess I get a

        /home/incisozl/public_html/.htaccess: RewriteBase: argument is not a valid URL, referer: http://213.175.210.49/~incisozl/inci-sozluk/somestring

    error. My rewrite rules are:

        RewriteEngine On
        RewriteBase /
        RewriteRule ^/?$ /index.php [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)?$ /seo.php?process=word&q=$1 [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluktest/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordtest&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-ileti/([0-9]+)/?$ /seo.php?process=eid&eid=$1 [L]
        RewriteRule ^inci-sozluk-ileticvp/([0-9]+)/?$ /seo.php?process=cvpeid&eid=$1 [L]

    BTW, it works fine when I use it with the www.incisozluk.org pointed domain.
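    A sketch of the first thing worth checking (an inference from the userdir setup, not from the post): under the temporary URL the site is served from /~incisozl/, so the base should match that path; stray characters after the argument (an invisible BOM or trailing whitespace in the file) are another known trigger of this exact message:

        RewriteEngine On
        RewriteBase /~incisozl/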

  • Missing line numbers in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have a web service implemented with C# using VS 2008. I publish it on IIS. I have modified the release build so the .pdb files are copied along with the DLLs into the target directory under inetpub. The web.config file also has debug="true". Then I call a web service method that throws an exception: the stack trace does not contain the line numbers. I have no idea what I am missing here; any ideas? Additional info: if I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (.pdb and .dll) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the .pdb files! Update: when I publish to IIS, all the .pdb files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no .pdb files. This does not happen if I run the project using the VS built-in web server. If I copy the .pdb files manually to the temp folder, I can see the line numbers. Any idea why the .pdb files are not copied to the temp folder? BTW, when I attach to the worker process I can see that it says "Symbols loaded"!

  • Create CAB file for ActiveX installation for IE

    - by vikasde
    I created a CAB file that contains my ActiveX control using CABARC.exe. I also created an .inf file. My .inf file looks like this:

        [version]
        signature="$CHICAGO$"
        AdvancedINF=2.0

        [Add.Code]
        MySetup.exe=MySetup.exe

        [MySetup.exe]
        file-win32-x86=thiscab
        clsid={49892510-B520-4b35-8ADF-57084DD2F717}

    My HTML looks like this:

        <object name="secondobj" style='display:none' id='TestActivex'
                classid='CLSID:49892510-B520-4b35-8ADF-57084DD2F717'
                codebase='http://myurl/MySetup.cab#version=1,0,0,0'></object>

    I created the CAB using the following command:

        C:\tools\Cab\BIN>CABARC.EXE N MySetup.cab MySetup.msi setup.inf

    I also added http://myurl to the trusted sites. The first time I opened the HTML page in IE, I saw a yellow bar, which I accepted. However, it never installed the ActiveX control. I don't see the installation in Program Files, nor can I see anything in the event logs, in the temporary download folder, or in "manage add-ons". Now every time I open the web page in IE, I no longer see the yellow bar. Can anybody help me out here, please?

  • Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API

    - by ageektrapped
    I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it whether I try it from PowerShell or from a C# console application. Here's the code I'm using:

        static void Main(string[] args)
        {
            DatastoreManager dm = new DatastoreManager(1033);
            Collection<Platform> platforms = dm.GetPlatforms();
            foreach (var p in platforms)
            {
                Console.WriteLine("{0} {1}", p.Name, p.Id);
            }

            Platform platform = platforms[3];
            Console.WriteLine("Selected {0}", platform.Name);

            Device device = platform.GetDevices()[0];
            device.Connect();
            Console.WriteLine("Device Connected");

            SystemInfo info = device.GetSystemInfo();
            Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo);

            Console.ReadLine();
        }

    My question: does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain-jane Visual Studio 2008 Pro. I imagine it's some config issue, since I can't do it from a console app or PowerShell. Here's the stack trace, as requested:

        System.IO.DirectoryNotFoundException was unhandled
          Message="The system cannot find the path specified.\r\n"
          Source="Device Connection Manager"
          StackTrace:
            at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice()
            at Microsoft.SmartDevice.Connectivity.Device.Connect()
            at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException:

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly being updated with user transactions, and the other table (C) contains product data that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load, to make sure we have the most efficient configuration. There is one column in the product table that is used in the view and has to be updated each week. The problem we are now encountering is that as volume increases on our transactions against tables A & B, the update to table C is causing deadlocks. I have tried several different methods, to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.

    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.

    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them.

    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions: a) we are using SQL Server 2005; does SQL Server 2008 have new functionality for indexed views that would help us? Is there now a way to do dirty reads with an indexed view? b) Is there a better approach to altering an existing view to point to a new table? Thanks!
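    One avenue that may be worth load testing (an assumption, not from the post): row-versioning isolation, available since SQL Server 2005, lets readers see the last committed row instead of blocking on the weekly update, which often removes this class of deadlock - at the cost of changed read semantics and extra tempdb version-store load. A sketch, with a hypothetical database name:

        -- ROLLBACK IMMEDIATE kicks other sessions off so the change can apply
        ALTER DATABASE MyDb
            SET READ_COMMITTED_SNAPSHOT ON
            WITH ROLLBACK IMMEDIATE;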

  • Django's post_save signal behaves weirdly with models using multi-table inheritance

    - by hekevintran
    I am noticing odd behavior in the way Django's post_save signal works when using a model that has multi-table inheritance. I have these two models:

        class Animal(models.Model):
            category = models.CharField(max_length=20)

        class Dog(Animal):
            color = models.CharField(max_length=10)

    I have a post-save callback called echo_category:

        def echo_category(sender, **kwargs):
            print "category: '%s'" % kwargs['instance'].category

        post_save.connect(echo_category, sender=Dog)

    I have this fixture:

        [
          {
            "pk": 1,
            "model": "animal.animal",
            "fields": { "category": "omnivore" }
          },
          {
            "pk": 1,
            "model": "animal.dog",
            "fields": { "color": "brown" }
          }
        ]

    In every part of the program except the post_save callback, the following is true:

        from animal.models import Dog
        Dog.objects.get(pk=1).category == u'omnivore'  # True

    When I run syncdb and the fixture is installed, the echo_category function is run. The output from syncdb is:

        $ python manage.py syncdb --noinput
        Installing json fixture 'initial_data' from '~/my_proj/animal/fixtures'.
        category: ''
        Installed 2 object(s) from 1 fixture(s)

    The weird thing here is that the dog object's category attribute is an empty string. Why is it not 'omnivore' like it is everywhere else? As a temporary (hopefully) workaround, I reload the object from the database in the post_save callback:

        def echo_category(sender, **kwargs):
            instance = kwargs['instance']
            instance = sender.objects.get(pk=instance.pk)
            print "category: '%s'" % instance.category

        post_save.connect(echo_category, sender=Dog)

    This works, but it is not something I like, because I must remember to do it whenever the model inherits from another model, and it has to hit the database again. The other weird thing is that I must use instance.pk to get the primary key; the normal 'id' attribute does not work (I cannot use instance.id). I do not know why this is. Maybe this is related to the reason why the category attribute is not doing the right thing?
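    A sketch of one commonly used guard (the diagnosis is an inference, not from the post): fixture loading saves each table's row "raw", before the parent Animal row is attached, and it passes raw=True to post_save, so raw saves can simply be skipped:

        def echo_category(sender, **kwargs):
            if kwargs.get('raw'):
                return  # fixture/loaddata save; parent-table fields not loaded yet
            print "category: '%s'" % kwargs['instance'].category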

  • pattern matching in .Net consistent with IsolatedStorageFile.GetFileNames() pattern matching

    - by Mick N
    Is the pattern-matching logic used by this API exposed for reuse somewhere in the .NET Framework? Something of the form FilePatternMatch(string searchPattern, string fileNameToTest) is what I'm looking for. I'm implementing a temporary workaround for WP7 not filtering the results for this overload, and I'd like the solution to both provide a consistent experience and avoid reinventing this functionality if it is exposed. If the behaviour is not exposed for reuse, a regular-expression solution (like glob pattern matching in .NET) will suffice and would save me the time spent testing the fine details of what the behaviour should be. Perhaps one of the answers posted in the thread linked above is correct; since I haven't confirmed the exact behaviour as yet, I wasn't able to determine this at a glance. Feel free to point me to one of those answers if you know it is behaviourally an exact match to the API referenced in the question title. I could assume the pattern matching is consistent with how DOS handled * and ? in 8.3 file names (I'm familiar with the behavioural nuances of that implementation), but it's reasonable to assume Microsoft has evolved pattern-matching behaviour for file names in the decade-plus since, so I thought I would check before proceeding on that assumption.
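    A sketch of the regex fallback mentioned above (a simplification: it treats * and ? the straightforward way and ignores the DOS 8.3 short-name quirks):

        using System.Text.RegularExpressions;

        static bool FilePatternMatch(string searchPattern, string fileNameToTest)
        {
            // escape everything, then re-open the two wildcard metacharacters
            string rx = "^" + Regex.Escape(searchPattern)
                                   .Replace(@"\*", ".*")
                                   .Replace(@"\?", ".") + "$";
            return Regex.IsMatch(fileNameToTest, rx, RegexOptions.IgnoreCase);
        }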

  • svn import, don't modify revision OR modify the list of files in a transaction

    - by Vaughan Durno
    Hi. I've gained so much knowledge/insight from this site in the past few years; now I'm actually hoping to get some enlightenment. The scenario is as follows: you have the general structure of the repo (trunk, branches, tags), but added to the layout you have another directory called 'db_revs'. Now, in the pre-commit hook, you take a dump of a specific database (the specifics are irrelevant) into a temporary file, say /tmp/REV.sql (REV being the HEAD revision number of the repo, or the transaction). OK, all is well, and you can just import that temp file into the repo at /db_revs/REV.sql. Now obviously that import, even though it happens during a commit, increments the revision of the repo. So when you do a commit at some point to, say, 'test.php' in the trunk, and it completes at, say, revision 159, then the pre-commit hook runs as it should and the DB dump gets imported; but then you are sitting with a tree in the repo browser where 'trunk' is at revision 159 and 'db_revs', which has the imported dump, is at 158 (I've made it so that the file name matches the revision, i.e. 159.sql, but that file is then at revision 158). NB: if you're doing an import in a pre-commit hook, you need to add some logic to not perform the import - say, by checking first for the existence of the temp file - otherwise it will cause, um, a stack overflow and your PC will quickly crawl to a standstill. So I wanted to know if it is possible to make an import not commit its changes. I realise I might be barking up the wrong tree to begin with, so I have another idea of doing this, which brings me to the second part of my question: would it be possible to modify the list of files that the transaction is about to commit to the repo? I know this can be done to a WC, but that won't help, as a WC is a checked-out copy of, say, the trunk, so I'm not sure how you would add a file to the 'db_revs' folder, which is above trunk? Any help is greatly appreciated. Cheers, Vaughan

  • Self-referencing tables in Linq2Sql

    - by J-Man
    Hi, I've seen a lot of questions on self-referencing tables in Linq2Sql and how to eagerly load all child records for a particular root object. I've implemented a temporary solution by accessing all the underlying properties, but you can see that this doesn't do the performance any good. The thing is, though, that all records are correlated with each other using a correlation GUID. Example below:

        RootElement
        - Id: 1
        - ParentId: null
        - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

        ChildElement1
        - Id: 2
        - ParentId: 1
        - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

        ChildElement2
        - Id: 3
        - ParentId: 2
        - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

        ChildElement1
        - Id: 4
        - ParentId: 2
        - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

    In my case I do have access to the correlationId, so I can retrieve all of my records by performing the following query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
        select element;

    But, of course, I want these elements associated with each other by executing this query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
            && element.ParentId == null
        select element;

    My question is: is it possible to combine the results of the first query as some sort of 'caching mechanism' for the query where I get the root element? Thanks for the input. J.
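    A sketch of one way to get that effect (shapes inferred from the example, so the names are assumptions): fetch the whole correlated set in one round trip, then stitch the parent/child structure together in memory with a lookup:

        // one database round trip for the whole correlated tree
        var all = db.Elements
                    .Where(e => e.CorrelationId == correlationId)
                    .ToList();

        // group in memory by ParentId; no further queries needed
        var byParent = all.ToLookup(e => e.ParentId);
        var root = byParent[(int?)null].Single();
        // children of any node: byParent[node.Id]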
