Search Results

Search found 21392 results on 856 pages for 'order of operations'.

Page 583/856

  • script to dynamically fix orphaned users after db restore

    - by JJgates
    After performing a database restore, I want to run a dynamic script to fix orphaned users. My script below loops through all the users displayed by executing sp_change_users_login 'report' and applies "ALTER USER [username] WITH LOGIN = [username]" to fix SID conflicts, instead of static GO statements. However, I'm getting an "incorrect syntax" error on line 15 and can't figure out why.

        DECLARE @Username varchar(100), @cmd varchar(100)
        DECLARE userLogin_cursor CURSOR FAST_FORWARD FOR
        SELECT UserName = name FROM sysusers
        WHERE issqluser = 1 AND (sid IS NOT NULL AND sid <> 0×0) AND suser_sname(sid) IS NULL
        ORDER BY name
        FOR READ ONLY
        OPEN userLogin_cursor
        FETCH NEXT FROM userLogin_cursor INTO @Username
        WHILE @@fetch_status = 0
        BEGIN
            SET @cmd = ‘ALTER USER ‘+@username+‘ WITH LOGIN ‘+@username
            EXECUTE(@cmd)
            FETCH NEXT FROM userLogin_cursor INTO @Username
        END
        CLOSE userLogin_cursor
        DEALLOCATE userLogin_cursor
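
    Two details in the posted script stand out: the SET @cmd line is built with curly quotes (‘ ’), which T-SQL does not accept as string delimiters, and ALTER USER ... WITH LOGIN expects an equals sign, as in the manual statement quoted above. Below is a minimal sketch of the same fix driven from Python; pyodbc as the driver and the connection string values are assumptions, not part of the original question.

        # Sketch: re-map orphaned database users to logins of the same name
        # after a restore. Assumes a pyodbc connection to the restored database.
        import pyodbc

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=localhost;DATABASE=RestoredDb;Trusted_Connection=yes")
        cursor = conn.cursor()

        # Users that carry a SID with no matching server login (orphaned users).
        cursor.execute("""
            SELECT name FROM sysusers
            WHERE issqluser = 1
              AND sid IS NOT NULL AND sid <> 0x0
              AND SUSER_SNAME(sid) IS NULL
            ORDER BY name
        """)

        for (username,) in cursor.fetchall():
            # Identifiers cannot be bound as parameters, so bracket-quote them;
            # a name containing ']' would need that character doubled.
            cursor.execute("ALTER USER [{0}] WITH LOGIN = [{0}]".format(username))

        conn.commit()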

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to (I estimate) about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions:
    I've read somewhere that it's not good to update blobs, because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this?
    Initially the blobs are constructed by appending data chunks, each up to 16 kilobytes in size. Would it be more efficient to use a separate chunk table instead, for example with fields parent_id, position, chunk? Then, to get the entire blob, one would do something like:

        SELECT GROUP_CONCAT(chunk ORDER BY position) FROM chunks WHERE parent_id = 187

    The result would be used in a PHP script.
    Is there any difference between the blob types, aside from the size needed for metadata, which should be negligible?
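
    A hedged sketch of the chunk-table idea, reassembling on the server with GROUP_CONCAT. It is written in Python with MySQLdb for brevity rather than the PHP/Propel stack in the question, and the table layout chunks(parent_id, position, chunk) is the one proposed above. One practical detail: GROUP_CONCAT truncates its result at group_concat_max_len, which defaults to 1024 bytes, so it has to be raised above the largest expected blob.

        # Reassemble one blob from its chunks; connection values are placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
        cur = conn.cursor()

        # GROUP_CONCAT silently truncates at group_concat_max_len (default 1024),
        # so raise it before concatenating ~100 KB of chunk data.
        cur.execute("SET SESSION group_concat_max_len = 1024 * 1024")

        cur.execute(
            "SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '') "
            "FROM chunks WHERE parent_id = %s", (187,))
        blob = cur.fetchone()[0]   # the reassembled, still-compressed payload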

    Read the article

  • Qt on Mac: where to find "configure"

    - by Gil
    hi, I am very new to Mac. I downloaded the Qt SDK for Mac, open source edition (http://get.qt.nokia.com/qtsdk/qt-sdk-mac-opensource-2010.02.dmg), and installed the package. I can run qmake, build samples and run demos, but I cannot run configure (in order to build the Qt libraries statically). It says: -bash: No such file or directory. The documentation says I should run it in the "Qt root folder", but what is that folder on a Mac? I looked for it in /usr/bin, /usr/local/Qt4.6, and /Developer/Tools/Qt. And what is "configure" on Mac anyway: an executable or a script? Thanks a lot.

    Read the article

  • WordPress database query running slow - one of the columns doesn't exist!

    - by Pavel
    Hi there. I'm having some problems with a query that WordPress runs. This is the one:

        SELECT DISTINCT ID, post_title, post_date, post_content,
               MATCH(post_title, post_content) AGAINST ('S') AS score
        FROM wp_posts
        WHERE MATCH (post_title, post_content) AGAINST ('S')
          AND post_date <= 'S' AND post_status = 'S' AND id != N AND post_type = 'S'
        ORDER BY score DESC

    When I run this query in phpMyAdmin it says that column N doesn't exist, so the clause "AND id != N" is not making any sense. I ran the query again without this clause and the db behaved like a fully optimized one. Please can someone give me a hint on this? My questions are: What is this clause used for? What is WordPress trying to find by running it? And can I modify core WordPress files to get rid of this clause? Any response or help greatly appreciated!!

    Read the article

  • python + auto ssh process to get date info

    - by david
    I need to run, from my Linux 5.3 machine, ssh [Linux machine, Red Hat 5.3] date in order to get the date results. During the ssh I need to answer the (yes/no)? question with yes, and the password prompt with diana_123, and then I will get the date results. Please advise how to do this as an automated process with Python (on my Linux machine I have Python 2.2.3). The Python script should take the IP address, perform the ssh to 103.116.140.151 automatically, and return the date result, e.g. Fri Nov 18 11:25:18 IST 2011. Example of the manual process:

        # ssh 103.116.140.151 date
        The authenticity of host '103.116.140.151 (103.116.140.151)' can't be established.
        RSA key fingerprint is ad:7e:df:9b:53:86:9f:98:17:70:2f:58:c2:5b:e2:e7.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '103.116.140.151' (RSA) to the list of known hosts.
        [email protected]'s password:
        Fri Nov 18 11:25:18 IST 2011
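
    A minimal sketch of the automation with the third-party pexpect module -- an assumption, since it would have to be installed, and the very old Python 2.2.3 mentioned above may need an equally old pexpect release. Using ssh keys instead of a hard-coded password would be the more robust route.

        # Drive "ssh <ip> date", answering the host-key and password prompts.
        import pexpect

        def remote_date(ip, password):
            child = pexpect.spawn('ssh %s date' % ip)
            i = child.expect(['continue connecting', 'password:'])
            if i == 0:                       # first connection: accept the host key
                child.sendline('yes')
                child.expect('password:')
            child.sendline(password)
            child.expect(pexpect.EOF)
            return child.before.strip()      # e.g. 'Fri Nov 18 11:25:18 IST 2011'

        print(remote_date('103.116.140.151', 'diana_123'))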

    Read the article

  • InTime and OutTime for the modified date

    - by Jash
    This question was already posted on June 4, but I still have not gotten a proper answer, so here it is again. Table structure:

        T_Person (Table 1)
        CARDNO: 168, 471, 488, 247, 519, 518, 331, 240, 518, 386, 441, 331

        T_Cardevent (Table 2)
        CARDEVENTDATE   CARDEVENTTIME
        20090225        163932
        20090225        164630
        20090225        165027
        20090225        165137
        20090225        165147
        20090225        165715
        20090225        165749
        20090303        162059
        20090303        162723
        20090303        155029
        20090303        155707
        20090303        162824

    Query:

        SELECT CARDNO, CARDEVENTDATE,
               (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 AS CardEvenDateAdjusted,
               CARDEVENTTIME
        FROM T_CARDEVENT
        WHERE (CARDEVENTDATE > 20090601)
        GROUP BY CARDNO, CARDEVENTDATE, CARDEVENTTIME,
                 (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000
        ORDER BY CARDNO, CARDEVENDATEADJUSTED

    With this query the date is displayed correctly according to that time window, 03:00:01 to 03:00:00. How can I get MIN(time) and MAX(time) for the adjusted date? I need the SQL query for this condition. Help me, urgent please.
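
    A hedged sketch of one way to get the in/out times: group by the card and the adjusted-date expression and take MIN/MAX of the event time. The SQL is shown as a Python string only for consistency with the other examples on this page; table and column names come from the question, and SQL Server syntax (as in the original query) is assumed.

        # MIN/MAX event time per card per adjusted date (the -30001 shift from the question).
        MIN_MAX_PER_ADJUSTED_DATE = """
        SELECT CARDNO,
               (1000000 * CAST(CARDEVENTDATE AS BIGINT)
                + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 AS CardEventDateAdjusted,
               MIN(CARDEVENTTIME) AS InTime,
               MAX(CARDEVENTTIME) AS OutTime
        FROM T_CARDEVENT
        GROUP BY CARDNO,
                 (1000000 * CAST(CARDEVENTDATE AS BIGINT)
                  + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000
        ORDER BY CARDNO, CardEventDateAdjusted
        """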

    Read the article

  • What is Test Driven Development? Does it require an initial design?

    - by Nirajan Singh
    Hello everybody, I am very new to TDD and have not started using it yet. But I know that we write the test first, then the actual code to pass the test, and then refactor towards a good design. My concern with TDD is where it fits in our SDLC. Suppose I get a requirement to build an order processing system. Without having any model or design for this system, how can I start writing tests? Don't we need to define the entities and their attributes before proceeding? If not, is it possible to develop a big system without any design? I am really confused about this. Can anyone help me get started with TDD? Thanks in advance.
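
    A minimal test-first sketch of the order-processing example, using Python's unittest for illustration: the test is written before any Order class exists, and the design (an order with line items and a total) falls out of making it pass. All names here are illustrative, not from the question.

        import unittest

        class OrderTest(unittest.TestCase):
            def test_total_is_sum_of_line_items(self):
                order = Order()
                order.add_item(name="widget", price=10, quantity=2)
                order.add_item(name="gadget", price=5, quantity=1)
                self.assertEqual(order.total(), 25)

        # The simplest Order that makes the test pass; refactor once it is green.
        class Order(object):
            def __init__(self):
                self._items = []

            def add_item(self, name, price, quantity):
                self._items.append((name, price, quantity))

            def total(self):
                return sum(price * qty for _, price, qty in self._items)

        if __name__ == "__main__":
            unittest.main()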

    Read the article

  • Can in-memory SQLite databases scale with concurrency?

    - by Kent Boogaart
    In order to prevent a SQLite in-memory database from being cleaned up, one must use the same connection to access the database. However, using the same connection causes SQLite to synchronize access to the database. Thus, if I have many threads performing reads against an in-memory database, it is slower on a multi-core machine than the exact same code running against a file-backed database. Is there any way to get the best of both worlds? That is, an in-memory database that permits multiple, concurrent calls to the database?
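
    One route to "both worlds" is a named in-memory database with a shared cache, which several connections -- and therefore several threads, each with its own connection -- can open at once, so reads are not funnelled through a single connection's lock. A hedged sketch with Python's sqlite3 for illustration (the question does not name a language, and URI-style connection strings are an assumption about the driver in use):

        import sqlite3, threading

        URI = "file:scratch?mode=memory&cache=shared"

        # Keep one connection open for the life of the process so the in-memory
        # database is not discarded when reader connections come and go.
        anchor = sqlite3.connect(URI, uri=True)
        anchor.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
        anchor.execute("INSERT INTO t (payload) VALUES ('hello')")
        anchor.commit()

        def reader():
            conn = sqlite3.connect(URI, uri=True)   # one connection per thread
            conn.execute("SELECT payload FROM t").fetchall()
            conn.close()

        threads = [threading.Thread(target=reader) for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()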

    Read the article

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
    I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool.
    Consider the following "blueprint" which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage.

        . d . d c c
        d d d d c c
        . d d d . c
        d d d d d c
        . d . d d c

    To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions. For the above blueprint, the solution looks like this:

        . 9 . 0 3 2
        8 1 1 1 3 2
        . 1 1 1 . 2
        7 1 1 1 4 2
        . 6 . 5 4 2

    Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant.
    My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle.
    I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough.
    More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead:

        . 3 . 5 6 8
        1 3 4 5 6 8
        . 3 4 5 . 8
        2 3 4 5 7 8
        . 3 . 5 7 8

    This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-hard, based on my limited math knowledge.
    Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor-path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
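
    For the "largest rectangle from each cell" step, the classic row-by-row histogram scan finds the biggest all-same-command rectangle in a single pass over the grid, which avoids recomputing per-cell extents. A hedged Python sketch (the grid literal is the blueprint from the question; function and variable names are mine):

        def largest_rectangle(grid, symbol):
            """Return (area, top, left, height, width) of the biggest rectangle of `symbol`."""
            best = (0, 0, 0, 0, 0)
            width = len(grid[0])
            heights = [0] * width                    # consecutive `symbol` cells above each column
            for row_idx, row in enumerate(grid):
                for col, cell in enumerate(row):
                    heights[col] = heights[col] + 1 if cell == symbol else 0
                stack = []                           # columns with strictly increasing heights
                for col in range(width + 1):
                    h = heights[col] if col < width else 0
                    while stack and heights[stack[-1]] >= h:
                        top = stack.pop()
                        height = heights[top]
                        left = stack[-1] + 1 if stack else 0
                        area = height * (col - left)
                        if area > best[0]:
                            best = (area, row_idx - height + 1, left, height, col - left)
                    stack.append(col)
            return best

        blueprint = [". d . d c c".split(),
                     "d d d d c c".split(),
                     ". d d d . c".split(),
                     "d d d d d c".split(),
                     ". d . d d c".split()]
        print(largest_rectangle(blueprint, "d"))     # (9, 1, 1, 3, 3): the 3x3 block labelled 1

    Running this once per command symbol, plotting the winner, clearing its cells and repeating reproduces the greedy largest-first behaviour described above, without the corner heuristics.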

    Read the article

  • Forcing the browser to pop a save as dialog box from a link pointing to remote url

    - by user360788
    Hi, I am building a web app that lets the user download files hosted on a CDN by clicking a link. The link should point to the CDN URL directly in order to minimize the load on our servers. We would like the browser to pop up the save-as dialog box when the user clicks the link, and not display the content of the file at all, so the page should not reload. However, we don't have access to the HTTP headers sent back from the CDN. Is it possible to pop up the save-as dialog box for the download using client-side code only?
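
    The save-as behaviour is normally triggered by a Content-Disposition: attachment header, and that header has to come from whoever actually serves the bytes, so with no control over the CDN's response headers, client-side code alone generally cannot force the dialog. The common workaround is a thin pass-through on your own host that adds the header, which admittedly trades away some of the "link straight to the CDN" goal. A hedged stdlib-only Python sketch; the CDN URL and filename are placeholders:

        from wsgiref.simple_server import make_server
        from urllib.request import urlopen

        CDN_URL = "https://cdn.example.com/files/report.pdf"   # placeholder

        def app(environ, start_response):
            upstream = urlopen(CDN_URL)
            start_response("200 OK", [
                ("Content-Type", "application/octet-stream"),
                ("Content-Disposition", 'attachment; filename="report.pdf"'),
            ])
            # Stream the upstream body through in 64 KB chunks.
            return iter(lambda: upstream.read(64 * 1024), b"")

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()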

    Read the article

  • DB function failed with error number 1 in Joomla admin panel

    - by sabuj
    When I access the Joomla article manager or module manager, I get the output below:

        500 - An error has occurred!
        DB function failed with error number 1
        Can't create/write to file '/tmp/#sql_57c0_0.MYD' (Errcode: 17)
        SQL=SELECT c.*, g.name AS groupname, cc.title AS name, u.name AS editor,
            f.content_id AS frontpage, s.title AS section_name, v.name AS author
        FROM jos_content AS c
        LEFT JOIN jos_categories AS cc ON cc.id = c.catid
        LEFT JOIN jos_sections AS s ON s.id = c.sectionid
        LEFT JOIN jos_groups AS g ON g.id = c.access
        LEFT JOIN jos_users AS u ON u.id = c.checked_out
        LEFT JOIN jos_users AS v ON v.id = c.created_by
        LEFT JOIN jos_content_frontpage AS f ON f.content_id = c.id
        WHERE c.state != -2
        ORDER BY section_name, section_name, cc.title, c.ordering
        LIMIT 0, 20

    Read the article

  • running Hadoop software on office computers (when they are idle)

    - by Shahbaz
    Is there a project which helps set up a Hadoop cluster on office desktops when they are idle? I'd like to experiment with Hadoop/MapReduce/HBase but don't have access to 5-10 computers. The computers at work are idle after hours and are connected to each other through a very high speed connection. What's more, data on these computers stays within our network, so there is no privacy issue. For this to work I need a fairly lightweight monitor running on each machine: when the computer has been idle for X hours, it joins the cluster; if the user logs on, it has to drop out of the cluster and give all CPU/memory back. Does something like this exist?

    Read the article

  • Deny http access to a directory, allow access from WordPress plugin

    - by luke
    Hey. I need to prevent direct access to http://www.site.com/wp-content/uploads/folder/something.pdf through the browser. However, the Download Monitor plugin I am using, which allows logged-in users to download the file, still needs to work. I tried:

        Order Allow,Deny
        Deny from all
        Allow from all

    but the download links no longer work, even though (I think) they are links produced by the script, e.g. http://www.site.com/wp-content/plugins/download-monitor/download.php?id=something.pdf. Enter that in the address bar and you correctly get a WordPress message, 'You must be logged in to download this file.' However, if someone knows the URL where the file was uploaded (http://www.site.com/wp-content/uploads/folder/something.pdf) they can still access it directly. I don't know how they would find the direct URL anyway (guesswork?), but the client wants it stopped! Thanks for any help.

    Read the article

  • Long IF tree with strings

    - by DalGr
    I have a C program which uses Lua for scripting. In order to keep readability and avoid importing several constants into the individual Lua states, I condense a large number of functions into a single call (such as ObjectSet(id, "ANGLE", 45)) by using an "action" string. To do this I have a large if tree comparing the action string to a list, such as: if (stringcompare(action, "ANGLE")) ... else if (stringcompare(action, "X")) ... etc. This approach works well; within the program it's not really slow, and adding a new action is quick. But I feel like a perfectionist: is there a better way to do this in C? And since Lua is in heavy use, maybe there is a way to use it for this purpose (embedded "chunks" making a dictionary)? Although this part is mostly curiosity.
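
    The usual alternative to a long if tree is a dispatch table: map each action name to a handler once, then look the handler up. A minimal sketch in Python for brevity; the same shape works in C as a sorted array of {name, function pointer} pairs searched with bsearch(), or on the Lua side as a table of functions keyed by the action string.

        def set_angle(obj, value):
            obj["angle"] = value

        def set_x(obj, value):
            obj["x"] = value

        HANDLERS = {
            "ANGLE": set_angle,
            "X": set_x,
        }

        def object_set(obj, action, value):
            try:
                HANDLERS[action](obj, value)       # one lookup instead of N comparisons
            except KeyError:
                raise ValueError("unknown action: %r" % action)

        obj = {}
        object_set(obj, "ANGLE", 45)               # mirrors ObjectSet(id, "ANGLE", 45)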

    Read the article

  • Checking values in this array, jQuery

    - by Felix Guerrero
    Hi. I have this jQuery collection where I add all my form's controls; it looks like:

        var name = $("#name"), surname = $("#surname"), address = $("#address"),
            phone = $("#phone"), photo = $("#photo"), grade = $("#grade"),
            profession = $("#profession"), email = $('#email'), title = $('#title'),
            allFields = $([]).add(name)
                             .add(surname)
                             .add(address)
                             .add(phone)
                             .add(photo)
                             .add(grade)
                             .add(profession)
                             .add(email)
                             .add(title)
                             .add(grade);

    I want to check the values of each element in 'allFields' with:

        function checkingFieldsArentEmpty(){
            var valid = true;
            for (var f in allFields){
                if (allFields[f].val() === null) // if any val is null just return false
                    return false;
            }
            // true
            return valid;
        }

    I need ideas to improve the last function. Thanks.

    Read the article

  • Send post data while opening SSE connection

    - by Prosto Trader
    I'm trying to establish an SSE connection and run some long-running actions on the server side, informing the user of progress through SSE events. What I don't understand is how I would send some data along with the new connection. Do I have to combine regular Ajax with the new EventSource, or is there a way to transfer POST data when the connection is opened? Here is what I have so far, and I need to send a pretty big JSON payload with the request. Is that possible, or is GET the only way to send data?

        var source = new EventSource('/terminal/ajax-put-packet-trade-order/');

    Read the article

  • make mongrel_rails (localhost:3000) visible to a virtual machine

    - by Max Williams
    I develop Rails on Ubuntu and I just set up a VirtualBox Windows XP virtual machine for IE testing. I'd like to be able to run mongrel_rails in Ubuntu and then jump into the VM to check it out, so I can jump back, make a change, jump into the VM again, reload the page, test it, and so on. Is this possible? In this sort of situation in the past I've had to set up an Apache server on my dev machine and run Mongrel under it, in order to get an externally visible (i.e. visible to my local network) IP address that I then paste into the address bar of IE in the VM. Is this really necessary? Is there a simpler way? Can I do something with my /etc/hosts or sites-available files to just make up some arbitrary network address which points to localhost:3000 on Ubuntu? Or something? Thanks, Max

    Read the article

  • Concatenate multiple values into one record without duplication

    - by mikehjun
    I have a dbf table like the one below, which is the result of a one-to-many join of two tables. I want unique zone values for each taxlot id.

        Input table:
        tid    zone
        1      A
        1      A
        1      B
        1      C
        2      D
        2      E
        3      C

        Desired output table:
        tid    zone
        1      A, B, C
        2      D, E
        3      C

    I got some help but couldn't make it work:

        inputTbl = r"C:\temp\input.dbf"
        taxIdZoningDict = {}
        searchRows = gp.searchcursor(inputTbl)
        searchRow = searchRows.next()
        while searchRow:
            if searchRow.TID in taxIdZoningDict:
                taxIdZoningDict[searchRow.TID].add(searchRow.ZONE)
            else:
                taxIdZoningDict[searchRow.TID] = set()  # a set prevents duplicates!
                taxIdZoningDict[searchRow.TID].add(searchRow.ZONE)
            searchRow = searchRows.next()

        outputTbl = r"C:\temp\output.dbf"
        gp.CreateTable_management(r"C:\temp", "output.dbf")
        gp.AddField_management(outputTbl, "TID", "LONG")
        gp.AddField_management(outputTbl, "ZONES", "TEXT", "", "", "20")

        tidList = taxIdZoningDict.keys()
        tidList.sort()  # sorts in ascending order

        insertRows = gp.insertcursor(outputTbl)
        for tid in tidList:
            concatString = ""
            for zone in taxIdZoningDict[tid]
                concatString = concatString + zone + ","
            insertRow = insertRows.newrow()
            insertRow.TID = tid
            insertRow.ZONES = concatString[:-1]
            insertRows.insertrow(insertRow)
        del insertRow
        del insertRows
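
    One thing in the posted loop would stop it running as written: the inner for statement is missing its colon. Also, since sets are unordered, sorting the zones before joining gives the "A, B, C" formatting of the desired output. A hedged corrected sketch of just that last section, keeping the gp cursor calls exactly as in the question:

        insertRows = gp.insertcursor(outputTbl)
        for tid in tidList:
            zones = list(taxIdZoningDict[tid])
            zones.sort()                           # sets have no order of their own
            insertRow = insertRows.newrow()
            insertRow.TID = tid
            insertRow.ZONES = ", ".join(zones)     # e.g. "A, B, C"
            insertRows.insertrow(insertRow)
        del insertRow
        del insertRows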

    Read the article

  • Will iPhone OS4 make your life easier or harder as a lone app developer?

    - by Matt
    I am interested to hear what people feel about the new iPhone OS 4 release. It is obviously very exciting to have access to all the new features; apparently (from apple.com) it has over 1500 new APIs. My original thought was "Wow, this is awesome", and I suppose it is. I was just getting comfortable with OS 3.2 development, though, and now there is a raft of additional stuff to learn in order to keep up with the pack, so I am feeling quite frustrated! Do you think that, working as an individual app developer, having access to these additional features would improve your applications or just water down the quality? I guess being given the opportunity to improve applications and provide better features should be welcomed. I think the frustration comes from struggling to keep up with the continuous changes, but that's the industry we are in, I suppose! Any thoughts/comments?

    Read the article

  • Dynamically create class attributes

    - by ahojnnes
    Hi, I need to dynamically create class attributes from a defaults dictionary:

        defaults = {
            'default_value1': True,
            'default_value2': True,
            'default_value3': True,
        }

        class Settings(object):
            default_value1 = some_complex_init_function(defaults['default_value1'], ...)
            default_value2 = some_complex_init_function(defaults['default_value2'], ...)
            default_value3 = some_complex_init_function(defaults['default_value3'], ...)

    I could also achieve this with something like __init__ for class creation, in order to create these attributes dynamically from the dictionary and save a lot of code and repetitive work. How would you do this? Thank you very much in advance!
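
    A minimal sketch of two common ways to do this: attach the attributes with setattr in a loop after the class statement, or build the class in one step with type(). some_complex_init_function below is a stand-in for the real initialiser from the question.

        defaults = {
            'default_value1': True,
            'default_value2': True,
            'default_value3': True,
        }

        def some_complex_init_function(value):
            return value                      # placeholder for the real logic

        class Settings(object):
            pass

        # Option 1: attach attributes dynamically after the class is defined.
        for name, default in defaults.items():
            setattr(Settings, name, some_complex_init_function(default))

        # Option 2: build the whole class in one call to type().
        Settings2 = type('Settings2', (object,),
                         dict((name, some_complex_init_function(d))
                              for name, d in defaults.items()))

        print(Settings.default_value1)        # True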

    Read the article

  • How do you use multiple versions of the same R package?

    - by Richie Cotton
    In order to be able to compare two versions of a package, I need to be able to choose which version of the package I load. R's package system is set by default to overwrite existing packages, so that you always have the latest version. How do I override this behaviour? My thoughts so far are: I could get the package sources, edit the descriptions to give different names and build, in effect, two different packages. I'd rather be able to work directly with the binaries though, as it is much less hassle. I don't necessarily need to have both versions of the packages loaded at the same time (just installed somewhere at the same time). I could perhaps mess about with Sys.getenv('R_HOME') to change the place where R installs the packages, and then .libPaths() to change the place where R looks for them. This seems hacky though, so does anyone have any better ideas?

    Read the article

  • How do I calculate a good hash code for a list of strings?

    - by Ian Ringrose
    Background: I have a short list of strings. The number of strings is not always the same, but is nearly always of the order of a “handful”. Our database will store these strings in a 2nd normalised table. These strings are never changed once they are written to the database. We wish to be able to match on these strings quickly in a query without the performance hit of doing lots of joins. So I am thinking of storing a hash code of all these strings in the main table and including it in our index, so the joins are only processed by the database when the hash code matches.
    So how do I get a good hash code? I could:
    - XOR the hash codes of all the strings together
    - XOR, but multiply the result after each string (say by 31)
    - concatenate all the strings together, then take the hash code of the result
    - some other way
    So what do people think? (If you care, we are using .NET and SQL Server.)
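
    One point worth noting for this use case: because the hash is stored in the database, it has to be stable across processes and runtime versions, so a checksum you control beats the platform's built-in string hash, whose value is not guaranteed to stay the same. A hedged sketch in Python of the "concatenate with a separator, then hash" option; the separator choice is an assumption, and the same idea ports to .NET with, say, a CRC32 or MD5 over the joined string.

        import zlib

        def combined_hash(strings):
            # Sort if the order of the strings should not matter; join with a
            # separator that cannot appear inside the strings so ("ab", "c") and
            # ("a", "bc") do not collide; then take a stable 32-bit checksum.
            joined = "\x1f".join(sorted(strings))
            return zlib.crc32(joined.encode("utf-8")) & 0xFFFFFFFF

        print(combined_hash(["red", "green", "blue"]))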

    Read the article

  • Building a world matrix

    - by DeadMG
    When building a world projection matrix from scale, rotate, and translate matrices, the translation matrix must be the last in the process, right? Else you'll be scaling or rotating your translations. Do scale and rotate need to go in a specific order? Right now I've got:

        std::for_each(objects.begin(), objects.end(), [&, this](D3D93DObject* ptr) {
            D3DXMATRIX WVP;
            D3DXMATRIX translation, rotationX, rotationY, rotationZ, scale;
            D3DXMatrixTranslation(&translation, ptr->position.x, ptr->position.y, ptr->position.z);
            D3DXMatrixRotationX(&rotationX, ptr->rotation.x);
            D3DXMatrixRotationY(&rotationY, ptr->rotation.y);
            D3DXMatrixRotationZ(&rotationZ, ptr->rotation.z);
            D3DXMatrixScaling(&translation, ptr->scale.x, ptr->scale.y, ptr->scale.z);
            WVP = rotationX * rotationY * rotationZ * scale * translation * ViewProjectionMatrix;
        });
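
    Two observations on the snippet, offered tentatively: D3DXMatrixScaling is passed &translation, so the scale matrix is never actually filled in, and with D3DX's row-vector convention the usual composition is scale, then rotation, then translation (non-uniform scale and rotation do not commute, so their relative order matters too). A small numpy illustration of why translation has to come last in that convention; the matrices here are hypothetical 2D homogeneous ones, not D3DX calls:

        import numpy as np

        def scaling(sx, sy):
            return np.array([[sx, 0, 0],
                             [0, sy, 0],
                             [0,  0, 1]], dtype=float)

        def translation(tx, ty):                     # row-vector convention: offsets in the last row
            return np.array([[1, 0, 0],
                             [0, 1, 0],
                             [tx, ty, 1]], dtype=float)

        v = np.array([1.0, 0.0, 1.0])                # homogeneous row vector (point at x=1)

        good = v @ scaling(2, 2) @ translation(5, 0)
        bad  = v @ translation(5, 0) @ scaling(2, 2)
        print(good[:2])   # [7. 0.]  -- point scaled to x=2, then moved by 5
        print(bad[:2])    # [12. 0.] -- the 5-unit translation got scaled as well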

    Read the article

  • Joomla: forms with upload and custom fields from inside the administration panel

    - by Stathis
    I want a plugin for Joomla, like jForms or ChronoForms, in order to make a form that uploads videos, along with other custom fields, to the db and manages them. The only problem is that I want this functionality to be available from inside the administrator console, not on a page in my site's frontend. My site does not have a login service, so I need the admin to be able to log in to the administration panel and upload and manage videos from there. Do you know of a plugin which supports this functionality? Thank you in advance.

    Read the article

  • Display rows from MySQL where a datetime is within the next hour

    - by alex
    I always have trouble with complicated SQL queries. This is what I have:

        $query = '
            SELECT id, name, info, date_time
            FROM acms_events
            WHERE date_time = DATE_SUB(NOW(), INTERVAL 1 HOUR)
            AND active = 1
            ORDER BY date_time ASC
            LIMIT 6
        ';

    I want to get up to 6 rows that are upcoming within the hour. Is my query wrong? It does not seem to get events that are upcoming within the next hour when I test it. What is the correct syntax for this? Thanks.
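
    A hedged sketch of the likely fix: "upcoming within the hour" is a range starting at NOW() and extending one hour forward, so the filter wants BETWEEN NOW() AND NOW() + 1 hour rather than an equality test against DATE_SUB, which looks one hour backwards and will almost never match exactly. Shown as a string constant in Python only for consistency with the other examples here; table and column names are from the question.

        UPCOMING_EVENTS = """
            SELECT id, name, info, date_time
            FROM acms_events
            WHERE date_time BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 1 HOUR)
              AND active = 1
            ORDER BY date_time ASC
            LIMIT 6
        """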

    Read the article
