Search Results

Search found 1724 results on 69 pages for 'belongs on superuser'.

Page 37/69 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • sp_help

    - by David-Betteridge
    One of the nice things about SQL Server Management Studio (SSMS) is that you can highlight a table name in a script and press Alt + F1 to perform sp_help on it. Unfortunately I've never been able to use that feature, as the majority of the tables in our product belong to a schema other than dbo. On a long train journey back to York I wondered if I could solve this problem by writing my own replacement for sp_help (which I've called sp_help_table_schemas). My version works by first checking the system tables to find out which schema the table belongs to:

        SELECT s.Name   --Find the schema
        FROM sys.schemas s
        JOIN sys.tables t ON t.schema_id = s.schema_id
        WHERE t.name = 'Orders'

    It then dynamically calls the standard sp_help procedure, this time supplying the table owner as well:

        SET @cmd = 'EXEC sp_help ''' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ObjectName) + ''' ;' ;
        EXEC ( @cmd )

    Once I had proved the basics worked I wrapped it up into a stored procedure and deployed it to the master database on my laptop. It was then just a question of going into Tools > Options within SSMS and defining the keyboard shortcut. A couple of notes: you can't amend the existing Alt+F1 entry, so I went with Ctrl+F1, and you need to open a new query window for the change to be picked up. So I can now highlight a table name and press Ctrl+F1. The completed script is attached. Thanks go to Martin Bell, who reviewed my stored procedure and gave some valuable advice.

    Read the article

  • Thick models Vs. Business Logic, Where do you draw the distinction?

    - by TokenMacGuy
    Today I got into a heated debate with another developer at my organization about where and how to add methods to database-mapped classes. We use sqlalchemy, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name, a nearly mechanical translation from database tables to python objects. In the argument, my position was that the primary value of using an ORM is that you can attach low-level behaviors and algorithms to the mapped classes. Models are classes first, and only secondarily persistent (they could be persisted as xml in a filesystem; you don't need to care). His view was that any behavior at all is "business logic", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only. I certainly do think there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how things get implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about in the previous paragraph, but I'm having a hard time putting my finger on what that distinction is. I have a better sense of what might be the API (which, in our case, is HTTP "ReSTful"), in that users invoke the API with what they want to do, distinct from what they are allowed to do and how it gets done. tl;dr: What kinds of things can or should go in a method on a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?
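
    To make the position concrete, here is a minimal sketch (assuming SQLAlchemy 1.4+'s declarative API and a hypothetical Order/OrderLine pair, not the code under debate) of the split being argued for: behavior derived purely from an object's own state sits on the mapped class, while policy that coordinates other systems sits in a separate service.

        # Hypothetical example, not the production models discussed above.
        from sqlalchemy import Column, ForeignKey, Integer, Numeric
        from sqlalchemy.orm import declarative_base, relationship

        Base = declarative_base()

        class OrderLine(Base):
            __tablename__ = "order_lines"
            id = Column(Integer, primary_key=True)
            order_id = Column(Integer, ForeignKey("orders.id"))
            quantity = Column(Integer, nullable=False)
            unit_price = Column(Numeric(10, 2), nullable=False)

        class Order(Base):
            __tablename__ = "orders"
            id = Column(Integer, primary_key=True)
            lines = relationship(OrderLine)

            def total(self):
                # "Domain logic": computed entirely from the object's own state,
                # so it works the same no matter how the object was persisted.
                return sum(l.quantity * l.unit_price for l in self.lines)

        class CheckoutService:
            """'Business logic': policy that spans persistence, payment, and so on."""
            def __init__(self, session, payment_gateway):
                self.session = session
                self.payment_gateway = payment_gateway

            def checkout(self, order):
                self.payment_gateway.charge(order.total())  # reuses the model's behavior
                self.session.commit()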

    Read the article

  • Which shopping cart / ecommerce platform to choose?

    - by fabien7474
    I need to build an ecommerce website within a tight budget and schedule. Of course, I have never done that before, so I have googled what my options are, and I have concluded that the following are no longer valid candidates:

    - Magento: steep learning curve
    - osCommerce: old, bad design, buggy and not user-friendly
    - Zencart, CRE Loaded, CubeCart: based on osCommerce
    - Virtuemart, uberCart, eCart: based on a CMS (Joomla, Drupal, WordPress) that is not necessary for my use case

    So I finally narrowed down my choices to these solutions:

    - PrestaShop: easy to use, great templating engine (smarty), but many modules are not free, yet indispensable
    - OpenCart: security issues and not great support from the main developer. See here and here.

    So, as you can see, I am a little bit confused, and if you can help me choose an easy-to-use, lightweight and cheap (not necessarily free) ecommerce solution, I would really appreciate it. By the way, I am a Java/Grails programmer, but I am also familiar with PHP and .NET (not with Python or Ruby/Rails). EDIT: It seems that this question is more appropriate for the Webmaster StackExchange site, so please move it to where it belongs (I cannot do that) instead of downvoting it. BTW, I have found a quite similar question on SO (http://stackoverflow.com/questions/3315638/php-ecommerce-system-which-one-is-easiest-to-modify) which is quite popular.

    Read the article

  • Testing ASP.NET security in Firefox

    - by blahblah
    I'm not sure whether this question belongs on StackOverflow or SuperUser, but here goes nothing... I'm trying to test out some basic security problems on my personal ASP.NET website to see exactly how the custom validators, etc. behave when tampering with the data. I've been looking at the Firefox extension TamperData, which seems to do the trick, but it doesn't feel very professional at all. The issue I'm having with TamperData is that the textbox for the POST data is way too small to hold the ASP.NET view state, so I have to copy that data into Emacs and back again to be productive at all. I also don't like that there doesn't seem to be an option to only tamper with data going to or from localhost. Any ideas on better extensions for the task, or better methods to test this?

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality; they lie rather in the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and there were accidental misentries on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that the data troubles may wake up some managers at the customer, and that some processes on the customer's side may stop working because they have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development, and data migration can be seen as being to data what refactoring is to code. I think data migration is essential for creating software that evolves. Without it, we would have to create painful software that somehow works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts which belong to this topic?

    Read the article

  • Scanline filling of polygons that share edges and vertices

    - by Belgin
    In this picture (a perspective projection of an icosahedron), the scanline (red) intersects the vertex at the top. In an icosahedron each edge belongs to two triangles. From edge a, only one triangle is visible; the other one is in the back. Same for edge d. Also, in order to determine what color the current pixel should be, each polygon has a flag which can be either 'in' or 'out', depending upon where on the scanline we currently are. Flags are flipped according to the intersection of the scanline with the edges. Now, as we go from a to d (because all edges are intersected with the scanline at that vertex), this happens: the triangle behind triangle 1 and triangle 1 itself are set 'in', then 2 is set 'in' and 1 is set 'out', then 3 is set 'in' and 2 is set 'out', and finally 3 is set 'out' and the one behind it is set 'in', which is not the desired behavior, because we only need the triangles which are facing us to be set 'in'; the rest should be 'out'. How do I process the edges in the Active Edge List (a list of edges that are currently intersected by the scanline) so the right polys are set 'in'? Also, I should mention that the edges are unique, which means there exists an array of edges in the data structure of the icosahedron which are pointed to by edge pointers in each of the triangles.
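
    One convention that usually resolves shared-vertex toggling (sketched below in Python with hypothetical names, not the poster's renderer) is to treat every edge as half-open in y, counting it only when the scanline satisfies y_min <= y < y_max, and to keep back-facing triangles out of the Active Edge List entirely. A vertex shared by two edges of the same triangle then contributes a consistent count: one toggle where the scanline genuinely enters or leaves the triangle, and an even, self-cancelling count where it only grazes a local extremum.

        # Hypothetical sketch of per-scanline flag toggling with half-open edges.
        def toggles_for_scanline(y, active_edges):
            """active_edges: iterable of (triangle_id, (x0, y0), (x1, y1)),
            already filtered so that only front-facing triangles contribute edges."""
            counts = {}
            for tri_id, (x0, y0), (x1, y1) in active_edges:
                y_min, y_max = sorted((y0, y1))
                if y_min <= y < y_max:          # the y_max endpoint is excluded
                    counts[tri_id] = counts.get(tri_id, 0) + 1
            # A triangle's in/out flag flips once per counted crossing; an even
            # count at a shared vertex leaves the flag unchanged on either side.
            return counts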

    Read the article

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In" and in the first chapter I faced with 3 terms: bug, defect and flaw. The author gave a definition for each of them but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect and what is a flaw? I think I know what bug is, a bug is a malfunction of a part of system which produces undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong in this? UPDATE To be more precise in the book I mentioned above, they (the words) are presented in a way to make a distinction, that's why I am asking to know more. In that book there are some examples denoting which sample belongs to what and which category. For example: Buffer overflow is said to be a bug and issues in method overriding (subclassing issues) is being related to flaw category. Again race condition handling issues are considered bugs and Error-handling problems (fails open) are told to be flaws! I want more elaboration on these regards.

    Read the article

  • Delete link to file without clearing readonly bit

    - by Joshua
    I have a set of files with multiple links to them. The files are owned by TFS source control, but other links are made to them. How do I delete the additional links without clearing the readonly bit? It's safe to assume:

    - The files have more than one link to them
    - You are not deleting the name owned by TFS
    - There are no potential race conditions
    - You have ACL full control for the files
    - The machine will not lose power, nor will your program be killed unless it takes way too long

    It's not safe to assume:

    - The readonly bit is set (don't set it if it's not)
    - You can leave the readonly bit clear if you encounter an error and it was initially set

    Do not migrate to superuser -- if migrated there the answer is impossible, because no standard tool can do this.
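
    Since the readonly attribute belongs to the file itself rather than to any one link, a deletion has to clear it, unlink the extra name, and then restore it through a name that still exists. A minimal sketch of that sequence in Python (a hypothetical helper, assuming a surviving link's path is known and, per the assumptions above, no concurrent access):

        import os
        import stat

        def unlink_preserving_readonly(link_to_delete, surviving_link):
            # Hypothetical sketch, not a hardened tool: the readonly attribute is
            # shared by every hard link, so clearing it here clears it for all.
            was_readonly = not (os.stat(link_to_delete).st_mode & stat.S_IWRITE)
            if was_readonly:
                os.chmod(link_to_delete, stat.S_IWRITE)   # clear readonly so unlink succeeds
            try:
                os.unlink(link_to_delete)
            finally:
                if was_readonly:
                    # Restore the attribute via a name that still exists.
                    os.chmod(surviving_link, stat.S_IREAD)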

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files, and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite via SFTP any of the files which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permission to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting up the server:

    Now set up permissions on `/var/www`, where your files are served from by default:

        $ sudo adduser $USER www-data
        $ sudo chgrp -R www-data /var/www
        $ sudo chmod -R g+rw /var/www
        $ sudo chmod -R g+s /var/www

    Now log out and log in again to make the changes take hold. The previous set of commands does the following:

    1. adds the current user ($USER) to the `www-data` group;
    2. changes `/var/www` to belong to the `www-data` group;
    3. adds read/write permissions for the group that `/var/www` belongs to;
    4. sets the SGID bit on `/var/www`; this final point bears some explaining.

    And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)

    Read the article

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    Title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, "item x of y processed"). I also need to be able to pause and stop the operation. (Stopping doesn't roll back the items already processed.) The thing is, it's not that each item takes a long time to process, it's that there are usually a lot of items. And what I've read so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting is something that belongs in the application layer, but it didn't go into the details. So having said all this, what I have in mind is the following:

    - User request with a list of items enters the application layer.
    - Application layer gets some information from the domain needed to process the items.
    - Application layer passes the items and the information off to some domain service (should the implementation of this service belong in the infrastructure layer?).
    - This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping it.
    - This worker thread will process each item in its own UoW. This means the domain information from earlier needs to be stored in some DTO.
    - Since nothing is really persisted, the service should be a singleton and thread safe.
    - Whenever a user requests a progress report or wants to pause/stop the operation, the application layer will ask the service.

    Would this be a correct solution? Or am I at least on the right track with this? Especially the singleton and thread-safe part makes the whole thing feel icky.
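
    For what the worker itself might look like, here is a minimal sketch in Python (hypothetical names; the threading primitives map to most platforms) of a service that processes items on a background thread, exposes "x of y" progress, and honours pause/stop requests between items:

        import threading

        class LongRunningOperation:
            # Hypothetical sketch of the service described above, not a framework API.
            def __init__(self, items, process_item):
                self._items = items
                self._process_item = process_item      # one unit of work per item
                self.processed = 0
                self.total = len(items)
                self._running = threading.Event()
                self._running.set()                    # set = not paused
                self._stopped = threading.Event()
                self._thread = threading.Thread(target=self._run, daemon=True)

            def start(self):
                self._thread.start()

            def pause(self):
                self._running.clear()

            def resume(self):
                self._running.set()

            def stop(self):
                self._stopped.set()
                self._running.set()                    # wake a paused worker so it can exit

            def progress(self):
                return self.processed, self.total      # "item x of y"

            def _run(self):
                for item in self._items:
                    self._running.wait()               # blocks while paused
                    if self._stopped.is_set():
                        break                          # already-processed items stay processed
                    self._process_item(item)
                    self.processed += 1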

    Read the article

  • How can I call a URL as a cron job in Webmin?

    - by EmmyS
    (Possibly this belongs on stackoverflow, although it's not really a programming issue since the code works when run directly. If it needs to be moved, though, no problem.) I have a PHP file (which consumes a National Weather Service web service via SOAP, if it matters) that I need to run on a scheduled basis, so I'm trying to set up a cron job in Webmin. If I use an absolute path to the file in the Command field, when I run it I get some strange errors:

        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 1: ?php: No such file or directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 2: //: is a directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 3: //DOCUMENTATION: No such file or directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 4: //: is a directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 5: syntax error near unexpected token `"running client code",'
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 5: `error_log("running client code", 1, "[email protected]");'

    The actual code in my file for those 5 lines looks like this:

        <?php
        // ***************************************************************************
        //DOCUMENTATION FROM WEATHER.GOV ALL STORED IN xmlClientComments.txt
        // ***************************************************************************
        error_log("running client code", 1, "[email protected]");

    The code runs perfectly fine when I run it directly in my browser, so why doesn't Webmin recognize it as code? (The same thing happens if I enter the actual URL in the Command field - http://mysite.com/test/ndfdXMLclient.php.) I've never worked with Webmin before; most of our hosts' cron control panels allow cron jobs to run PHP files like this with no issue. Is there some trick to getting Webmin to read PHP as actual PHP?

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out from ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even though you don't get a syntax error compiling the code, you might obtain this error at execution time:

        The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only.

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Binding nodes into the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.

    Read the article

  • Can't find a Wordpress image/photo gallery plugin

    - by mgroves
    I've been looking at Wordpress plugins for photo galleries (so maybe this is for superuser.com), and I've been very frustrated so far. It seems like what I'd like to do is a very common use case:

    - Admin: Be able to upload multiple pictures (at a time)
    - Admin: Be able to assign a "gallery" to those pictures as I upload them
    - User: Be able to go to a page with a (paged) list of all galleries
    - User: Be able to click on a gallery and view the images (again, probably paged) in that gallery
    - User: Be able to click on an image to get larger/largest sizes
    - User: Be able to leave comments on individual pictures (this is a "nice to have")

    The images/galleries could be totally independent of posts/pages, but it would be nice to be able to embed those images/galleries into posts/pages when necessary. Is there anything out there like this that I'm missing? I've tried a handful of plugins and none of them seem to be built for a use case anywhere close to what I'm looking for. One of the reasons I'm trying to use Wordpress is to reduce time spent coding everything I want.

    Read the article

  • Can this package be recompiled

    - by Saif Bechan
    Hello, I asked this question on superuser but I did not get a good answer there, and I really need the answer. I know some of you here can answer this question. I have installed nginx via yum. Now I want to add a module, but to do so I have to compile the source again and include the new module -- and I can't find the source. Does someone know what I have to do to recompile the source and get the module in?

    Read the article

  • Inconsistency between date-time in terminal and clock

    - by Franck
    I can't figure out why I have a 5-hour difference between the GUI clock and the date command in the terminal. My BIOS clock is set to GMT... Any ideas?

        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:48:47 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure tzdata
        Current default time zone: 'Europe/Paris'
        Local time is now:      Wed Apr 11 09:49:02 CEST 2012.
        Universal Time is now:  Wed Apr 11 07:49:02 UTC 2012.
        franck@franck-ThinkPad-T61:~$ tail /etc/timezone
        Europe/Paris
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:21 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata
        Current default time zone: 'Europe/Paris'
        Local time is now:      Wed Apr 11 09:49:27 CEST 2012.
        Universal Time is now:  Wed Apr 11 07:49:27 UTC 2012.
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:30 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo cat /etc/default/rcS
        #
        # /etc/default/rcS
        #
        # Default settings for the scripts in /etc/rcS.d/
        #
        # For information about these variables see the rcS(5) manual page.
        #
        # This file belongs to the "initscripts" package.
        TMPTIME=0
        SULOGIN=no
        DELAYLOGIN=no
        UTC=yes
        VERBOSE=no
        FSCKFIX=no
        franck@franck-ThinkPad-T61:~$ sudo hwclock --show
        mer. 11 avril 2012 07:49:48 CDT  -0.555705 secondes

    Read the article

  • How do I change the effective user of psql?

    - by gvkv
    I'm using psql to run a simple set of COPY statements contained in a file:

        psql -d mydb -f 'wbf_queries.data.sql'

    where wbf_queries.data.sql contains lines like:

        copy <my_query> to '/home/gvkv/mydata' delimiter ',' null '';
        ...

    but I get a permission denied error:

        ... ERROR: could not open file ... for writing: Permission denied

    I'm connecting under my user account (gvkv), which is also a superuser in PostgreSQL. Obviously, psql is running under a different (effective) user, but I don't know how to change this. Can it be done within psql, or do I need some unix-fu?

    Read the article

  • Is there a technical way to speed up a general program above current PC speed limit?

    - by Maksee
    Let's imagine I developed a Windows console application implementing some algorithm that calculates something. Let's say it doesn't use any threads, just a straightforward linear approach with ifs, loops and so on. Is there any technical way to make it run 100x faster than on the most advanced current PC? For example, one way would be to run it on a supercomputer that emulates an i386 faster than any existing PC. But in that case the question is what computer, and does it really have the ability to emulate Windows? In other words, are there real examples of such an approach? Although in general it looks useless, if there is a way, one could develop a program on a general home computer and pay to run it much faster on some other hardware. I suppose that this question could be asked on superuser.com, but since there are possible specifics involving such things as assembler instructions or threads, I thought that stackoverflow.com is better.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • Buffer System For Items

    - by Ohmages
    I am going to reference this image of what I want to accomplish in JavaScript. This is the Diablo buffer system. This question may be a bit advanced (or possibly not even allowed), but I was wondering how you might go about implementing this type of system in a JavaScript game. Currently, implementing such a system in JavaScript escapes me, and I am turning to SO to get some suggestions, ideas, and hopefully some insight into how I could accomplish this without being too costly on the CPU. Some thoughts of mine for implementing such a system would be to:

    - Create DIVs within a DIV that hold each position of the inventory
    - Go through each item you own in a container and see which DIV it belongs to
    - Make said item's image the DIV's image

    This type of system might possibly work if ALL items were 1x1, but for this example it's not going to work out. I am at a complete loss of ideas on how to accomplish this. Although maybe rendering directly to the canvas and checking mouse coords could work, there would more than likely be A HUGE annoyance when checking whether other items are overlapping each other (meaning you can't place the item down, and possibly switching the item with the cursor item). That said, what am I left with? Do I need to makeshift my own hack system with messy code, or is there some source out there (that I don't know about) that has replicated this type of system in their own game? I would be very grateful to get some replies on how you might go about doing this, and will accept answers that can logically explain how you might implement such a system (code is not required). P.S. I'd like to use pure JavaScript, and nothing else (even though it might be "reinventing the wheel", I also like to learn).
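
    The bookkeeping for multi-cell items doesn't have to live in the DOM or the canvas at all; a plain 2D occupancy grid answers "does this item fit here?" independently of how it is drawn. A minimal sketch of that idea follows (written here in Python with hypothetical names purely to show the data structure; the same logic ports directly to JavaScript):

        from collections import namedtuple

        # Hypothetical item type: width/height measured in inventory cells.
        Item = namedtuple("Item", "name width height")

        class InventoryGrid:
            def __init__(self, cols, rows):
                self.cols, self.rows = cols, rows
                self.cells = [[None] * cols for _ in range(rows)]   # cell -> item or None

            def _footprint(self, x, y, item):
                return [(cx, cy)
                        for cy in range(y, y + item.height)
                        for cx in range(x, x + item.width)]

            def can_place(self, x, y, item):
                if x < 0 or y < 0 or x + item.width > self.cols or y + item.height > self.rows:
                    return False                       # off the edge of the grid
                return all(self.cells[cy][cx] is None
                           for cx, cy in self._footprint(x, y, item))

            def place(self, x, y, item):
                if not self.can_place(x, y, item):
                    return False                       # overlap: caller may swap with the cursor item
                for cx, cy in self._footprint(x, y, item):
                    self.cells[cy][cx] = item
                return True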

    Read the article

  • Designs for outputting to a spreadsheet

    - by Austin Moore
    I'm working on a project where we are tasked with gathering various data and outputting it to a spreadsheet. We are having tons of problems with the file that holds the code to write the spreadsheet. The cell that each piece of data belongs to is hardcoded, so any time you need to add anything to the middle of the spreadsheet, you have to increment the location of all the fields after it in the code. There are random blank rows to add padding between sections, and subsections within the sections, so there's no real pattern that we can replicate. Essentially, any time we have to add or change anything on the spreadsheet it requires many long and tedious hours. The code is all in this one large file, hacked together over time in Perl. I've come up with a few OO solutions, but I'm not too familiar with OO programming in Perl and all my attempts at it haven't been great, so I've shied away from it so far. I've suggested we handle this section of the program with a more OO-friendly language, but apparently we can't. I've also suggested that we scrap the entire spreadsheet idea and just move to a webpage, but we can't do that either. We've been working on this project for a few months, and every time we have to change that file, we all dread it. I'm thinking it's time to start some refactoring. However, I don't even know what could make this file easier to work with. The way the output is formatted makes it so that it has to be somewhat hardcoded. I'm wondering if anyone has insight into any design patterns or techniques they have used to tackle a similar problem. I'm open to any ideas. Perl-specific answers are welcome, but I am also interested in language-agnostic solutions.
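
    One language-agnostic pattern that tends to remove the "increment every later cell" problem is to describe the sheet as an ordered list of sections, each an ordered list of rows, and let a single writer compute the coordinates; inserting a row then shifts everything after it automatically. A rough sketch (in Python with hypothetical names, since the spreadsheet API in use isn't specified):

        def write_sheet(write_cell, sections):
            """write_cell(row, col, value): whatever spreadsheet API is actually in use.
            sections: ordered list of {"rows": [[cell, ...], ...], "padding": n}."""
            row = 0
            for section in sections:
                for cells in section["rows"]:
                    for col, value in enumerate(cells):
                        write_cell(row, col, value)
                    row += 1
                row += section.get("padding", 1)       # blank rows between sections

    The same layout-as-data idea works in Perl with nested array/hash references; the point is that no cell address appears anywhere except inside the single writer loop.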

    Read the article

  • Using Qt with custom MinGW

    - by ereOn
    Hi, I don't know if this question would fit better on superuser.com, but since it's rather compiler-related, I'll give it a try here. I have to use Qt with a specific version of gcc (4.5). I downloaded the latest official Qt release for Windows (Vista, 32-bit version) and didn't install the shipped MinGW version; I just installed the Qt libraries/binaries. In a console, when I type qmake && make, make fails, complaining that 'g++' is not recognized as an internal command. If I type g++ in the same console, however, I get the following output: g++: no input files. So g++ is definitely recognized. For those who may ask, both the Qt binaries directory and the MinGW binaries directory are in the system PATH environment variable. What could be wrong here?

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - creating subdomains that are centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: re-usability, asynchronous development and maintainability. Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes rather than just the methods inside of classes? Right now I do believe there is an issue with these mega packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

    Read the article

  • I need an approach to the problem of preventing inserting duplicate records into the database

    - by Maurice
    Apologies if this question is asked on the incorrect "stack". A webservice that I call returns a list of data. The data from the webservice is updated periodically, so a call to the webservice done in one hour could return the same data as a call done an hour later. Also, the data is returned based on a start and end date. We have multiple users that can run the webservice search, and duplicate data is very likely to be returned (especially for historical data). However, I don't want to insert this duplicate data into the database. I've created a db table in which the data is stored (the most important columns are):

        Id int autoincrement PK
        Date date not null       --The date to which the data set belongs.
        LastUpdate date not null --The date the data set was last updated.
        UserName varchar(50)     --The name of the user doing the search.

    I use SQL Server 2008 Express with C# 4.0 and Visual Studio 2010. Entity Framework is used as the ORM. If stored procedures could be avoided in the proposed solution, that would be a plus. Another way of interpreting what I'm asking for a solution to is as follows: I have a million unique records in my table. A user does a new search. The search results from the user contain around 300k rows of data that are already in the db. An efficient solution to finding and inserting only the unique records is needed.
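
    One common shape for this (sketched here in Python with hypothetical names, since the natural key of a data row isn't shown above; the same pattern translates to C#/Entity Framework or to a staging-table anti-join in SQL) is to load the keys already stored for the affected date range in one query, filter the incoming rows against that set, and insert only the remainder. A unique constraint on the natural key is worth adding as well, so concurrent users cannot slip duplicates past the check.

        def insert_new_records(existing_keys, insert_rows, incoming, key_of):
            """existing_keys: set of natural keys already in the table, fetched in
            one query for the search's date range.
            key_of(row): returns the row's natural key, e.g. (row.date, row.series_id)
            -- hypothetical fields, adjust to the real data."""
            fresh = [row for row in incoming if key_of(row) not in existing_keys]
            insert_rows(fresh)                 # single bulk insert of the new rows
            return len(fresh)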

    Read the article

  • PHP Image gallery that integrates well into custom CMS

    - by Thorarin
    I've been trying to find an image gallery that plays nicely with our custom CMS. I've evaluated a number of them, but none seems to have the feature list that I would like:

    - Runs in a LAMP environment
    - Free software or low license costs (the website belongs to a non-profit organisation)
    - Multi-user support
    - Multiple albums. We're posting concert pictures, and would like an album per event.
    - Pluggable authentication system. I want to reuse the accounts we have for our CMS. Permissions can be done inside the gallery itself, but I want to have a single sign-on solution in a maintainable manner, by writing my own plugin/add-on for the software.
    - Upload support (multiple images at the same time)

    And preferably also:

    - Can be integrated into a PHP page layout without IFRAMEs
    - Automatic resizing of uploaded images to a maximum size
    - Ability for visitors to place comments

    This combination is proving hard to find, especially the authentication requirement. I don't want to mess around all over the place in the source code to make it use the existing authentication. A plugin would be ideal, but alternatively a well-thought-out software design that allows for maintainable surgical changes would be acceptable. Any suggestions on which software I should take a closer look at?

    Read the article

  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    I posted this on SuperUser... but I was hoping the pros here at SO might have a good idea about how to fix this as well. Normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based on GNU make that are called when creating an executable. This is the error that I see whenever I call my external tool:

        ...\gnu\make.exe): * couldn't commit memory for cygwin heap, Win32 error 487

    The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >