Search Results

Search found 13859 results on 555 pages for 'non functional'.


  • Getting moved out of a development job

    - by Jay
    I'm a year out of college and I started my first dev job at a small (<15 people) company several months ago. It was an internship position that recently turned full time. The position started out as development but for full time I got offered a grab bag of positions: qa, docs, call support and some dev work. It's clear that my employers feel I am lacking dev skills, which is true. I did not major in CS in college and did not have much dev experience. However, I'm convinced that I can be a good developer and I will be a good developer once given the chance to write lots of code. My question is simple: what should I do? As I see it, there are two options. Work hard in the non-dev duties so that my employers may eventually give me significant dev responsibilities. Look for a new job where I will be a developer first and an all purpose guy second (if at all). Thanks guys.

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that while the tarballs are sufficient to rebuild from, it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes):

    - long-term, the process isn't sustainable: each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
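    One sketch of a common approach (rsnapshot and rdiff-backup package the same idea, but the core fits in one cron-able command; the paths and host names here are hypothetical):

        # Pull the changing data from box1, hard-linking anything unchanged
        # since the previous run: every dated tree looks like a full backup,
        # but only the deltas consume disk.
        rsync -a --delete \
              --link-dest=/backups/box1/yesterday \
              box1:/etc box1:/var/backups/mysql \
              /backups/box1/today

    Run nightly from the offsite box and rotate the dated directories; burning any one tree to DVD covers the occasional offline copy.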

    Read the article

  • Item in multiple lists

    - by Evan Teran
    So I have some legacy code which I would love to update using more modern techniques. But I fear that, given the way things are designed, it is a non-option. The core issue is that often a node is in more than one list at a time. Something like this:

        struct T {
            T *next_1;
            T *prev_1;
            T *next_2;
            T *prev_2;
            int value;
        };

    This allows the code to have a single object of type T allocated and inserted into 2 doubly linked lists, nice and efficient. Obviously I could just have 2 std::list<T*>'s and insert the object into both... but there is one thing which would be way less efficient: removal. Often the code needs to "destroy" an object of type T, and this includes removing the element from all lists. This is nice because, given a T*, the code can remove that object from all lists it exists in. With something like a std::list I would need to search for the object to get an iterator, then remove that (I can't just pass around an iterator because it is in several lists). Is there a nice C++-ish solution to this, or is the manually rolled way the best way? I have a feeling the manually rolled way is the answer, but I figured I'd ask.
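    For what it's worth, Boost.Intrusive packages exactly this pattern; a minimal sketch, assuming Boost is an option (auto_unlink hooks detach themselves from whatever list they sit in when the object is destroyed):

        #include <boost/intrusive/list.hpp>
        namespace bi = boost::intrusive;

        typedef bi::list_member_hook< bi::link_mode<bi::auto_unlink> > Hook;

        struct T {
            int value;
            Hook hook1, hook2;   // one hook per list membership
        };

        // auto_unlink hooks require constant_time_size<false>
        typedef bi::list< T, bi::member_hook<T, Hook, &T::hook1>,
                          bi::constant_time_size<false> > List1;
        typedef bi::list< T, bi::member_hook<T, Hook, &T::hook2>,
                          bi::constant_time_size<false> > List2;

    A T pushed onto a List1 and a List2 allocates no list nodes of its own, and destroying it unlinks it from both lists - the same O(1) removal the hand-rolled pointers buy.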

    Read the article

  • IIS to SQL Server kerberos auth issues

    - by crosan
    We have a 3rd party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to get to the website we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which we want it to do (we want this account to pass along the AD credentials of the person signed into the website to SQL Server instead of a service account). We also set up the SPNs for the SrvWeb_iis account via the following command:

        setspn -A HTTP/SrvWeb.company.com SrvWeb_iis

    The website pulls up, but the section of the website that makes the call to the database returns the message:

        Cannot execute database query.
        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries for my logging in, but it seems to be using NTLM and not Kerberos:

        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM

    Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.
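    One thing stands out: the SPN was registered for SrvWeb.company.com, but browsers hit application.company.com. When no SPN matches the host name in the URL, clients quietly fall back to NTLM, which cannot delegate and produces exactly this ANONYMOUS LOGON failure. A sketch of the registrations worth checking (the account and server names are the question's; the SQL service account is an assumption):

        rem the HTTP SPN must match the host name clients actually use,
        rem and it must live on the app pool account
        setspn -A HTTP/application.company.com company\SrvWeb_iis

        rem the SQL Server service account needs its MSSQLSvc SPN too,
        rem e.g. for a default instance running as a hypothetical company\sqlsvc:
        setspn -A MSSQLSvc/SvrSQL.company.com:1433 company\sqlsvc

    With both in place, constrained delegation from SrvWeb_iis to that MSSQLSvc SPN is what lets the user's ticket travel on to SQL Server.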

    Read the article

  • How can variadic char template arguments from user defined literals be converted back into numeric types?

    - by Pubby
    This question is being asked because of this one. C++11 allows you to define literals like this for numeric literals:

        template<char...> OutputType operator "" _suffix();

    which means that 503_suffix would become <'5','0','3'>. This is nice, although it isn't very useful in the form it's in. How can I transform this back into a numeric type? This would turn <'5','0','3'> into a constexpr 503. Additionally, it must also work on floating point literals: <'5','.','3'> would turn into int 5 or float 5.3. A partial solution was found in the previous question, but it doesn't work on non-integers:

        template <typename t>
        constexpr t pow(t base, int exp) {
            return (exp > 0) ? base * pow(base, exp - 1) : 1;
        }

        template <char...> struct literal;

        template <> struct literal<> {
            static const unsigned int to_int = 0;
        };

        template <char c, char ...cv> struct literal<c, cv...> {
            static const unsigned int to_int =
                (c - '0') * pow(10, sizeof...(cv)) + literal<cv...>::to_int;
        };

        // use: literal<...>::to_int
        // literal<'1','.','5'>::to_int doesn't work
        // literal<'1','.','5'>::to_float not implemented
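    One possible extension, a sketch using only C++11 constexpr recursion (the _num suffix is made up, and the usual floating point caveats apply): split the pack at the '.', accumulating whole digits base-10 and fractional digits with a shrinking scale.

        constexpr double frac(double) { return 0.0; }

        template <typename... Rest>
        constexpr double frac(double scale, char c, Rest... rest) {
            // each digit past the '.' weighs a tenth of the previous one
            return (c - '0') * scale + frac(scale / 10.0, rest...);
        }

        constexpr double whole(double acc) { return acc; }

        template <typename... Rest>
        constexpr double whole(double acc, char c, Rest... rest) {
            return c == '.' ? acc + frac(0.1, rest...)
                            : whole(acc * 10.0 + (c - '0'), rest...);
        }

        template <char... Cs>
        constexpr double operator "" _num() { return whole(0.0, Cs...); }

        static_assert(503_num == 503.0, "integral literals still work");
        // 5.3_num evaluates to 5.3 at compile time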

    Read the article

  • row number over text column sort

    - by Marty Trenouth
    I'm having problems with dynamic sorting using ROW_NUMBER in SQL Server. I have it working, but it's throwing errors on non-numeric fields. What do I need to change to get sorts on alpha columns working? Sample data:

        ID  Description
        5   Test
        6   Desert
        3   A evil

    I've got a SQL procedure:

        CREATE PROCEDURE [CRUDS].[MyTable_Search]
            -- Add the parameters for the stored procedure here
            -- Full Parameter List
            @ID int = NULL,
            @Description nvarchar(256) = NULL,
            @StartIndex int = 0,
            @Count int = null,
            @Order varchar(128) = 'ID asc'
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            select * from (
                select ROW_NUMBER() OVER (Order By
                           case when @Order = 'ID asc' then [TableName].ID
                                when @Order = 'Description asc' then [TableName].Description
                           end asc,
                           case when @Order = 'ID desc' then [TableName].ID
                                when @Order = 'Description desc' then [TableName].Description
                           end desc
                       ) as row,
                       [TableName].*
                from [TableName]
                where (@ID IS NULL OR [TableName].ID = @ID)
                  AND (@Description IS NULL OR [TableName].Description = @Description)
            ) as a
            where row > @StartIndex
              and (@Count is null or row <= @StartIndex + @Count)
            order by
                case when @Order = 'ID asc' then a.ID
                     when @Order = 'Description asc' then a.Description
                end asc,
                case when @Order = 'ID desc' then a.ID
                     when @Order = 'Description desc' then a.Description
                end desc
        END
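    The likely culprit: a CASE expression has a single result type, and with ID (int) and Description (nvarchar) in the same CASE, SQL Server picks int by data type precedence and then fails converting values like 'Test'. A sketch of the usual fix, one CASE per data type (the outer ORDER BY needs the same split):

        ROW_NUMBER() OVER (ORDER BY
            case when @Order = 'ID asc'           then [TableName].ID  end asc,
            case when @Order = 'ID desc'          then [TableName].ID  end desc,
            case when @Order = 'Description asc'  then [TableName].Description end asc,
            case when @Order = 'Description desc' then [TableName].Description end desc
        ) as row

    Each CASE now only ever yields one type (or NULL, which sorts consistently), so no int/nvarchar conversion is attempted.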

    Read the article

  • GetIpAddrTable() leaks memory. How to resolve that?

    - by Stabledog
    On my Windows 7 box, this simple program causes the memory use of the application to creep up continuously, with no upper bound. I've stripped out everything non-essential, and it seems clear that the culprit is the Microsoft Iphlpapi function GetIpAddrTable(). On each call, it leaks some memory. In a loop (e.g. checking for changes to the network interface list), it is unsustainable. There seems to be no async notification API which could do this job, so now I'm faced with possibly having to isolate this logic into a separate process and recycle the process periodically -- an ugly solution. Any ideas?

        // IphlpLeak.cpp - demonstrates that GetIpAddrTable leaks memory
        // internally: run this and watch the memory use of the app climb
        // up continuously with no upper bound.
        #include <stdio.h>
        #include <windows.h>
        #include <assert.h>
        #include <Iphlpapi.h>
        #pragma comment(lib,"Iphlpapi.lib")

        void testLeak() {
            static unsigned char buf[16384];
            DWORD dwSize(sizeof(buf));
            if (GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false)
                    == ERROR_INSUFFICIENT_BUFFER) {
                assert(0); // we never hit this branch.
                return;
            }
        }

        int main(int argc, char* argv[]) {
            for ( int i = 0; true; i++ ) {
                testLeak();
                printf("i=%d\n", i);
                Sleep(1000);
            }
            return 0;
        }
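    On the "no async notification API" point, Iphlpapi does ship a change notification for this case: NotifyAddrChange signals when the IPv4 address table changes, so the table only needs re-reading on an actual change instead of once a second. A minimal sketch of the synchronous form, dropping into the same source file (untested on Windows 7):

        // Blocks until the local IPv4 address table changes.
        // With both parameters NULL the call is synchronous; pass a HANDLE
        // and an OVERLAPPED to get the asynchronous flavour instead.
        void waitForAddressChange(void) {
            DWORD rc = NotifyAddrChange(NULL, NULL);
            if (rc != NO_ERROR) {
                // log and back off rather than spinning
            }
        }

    Whether this dodges the leak depends on whether GetIpAddrTable itself leaks or is merely the most frequent call, but it at least turns an every-second leak into an every-change one.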

    Read the article

  • How To Join Tables from Two Different Contexts with LINQ2SQL?

    - by RSolberg
    I have 2 data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL?

    Why? We are using a SaaS product for tracking our time, projects, etc., and would like to send new service requests to this product to prevent our team from duplicating data entry.

    Context A: This db stores service request information. It is a third party DB and we are not able to make changes to the structure of this DB, as that could have unintended, non-supportable consequences downstream.

    Context B: This db stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure, etc. Unprocessed service requests should find their way into this DB, and another process will identify them as not being processed and send the records to the SaaS product.

    This is the query that I am looking to modify. I was able to do a !list.Contains(c.swHDCaseId) initially, but this cannot handle more than 2100 items. Is there a way to add a join to the other context?

        var query = (from c in contextA.Cases
                     where monitoredInboxList.Contains(c.INBOXES.inboxName)
                     select new
                     {
                         //setup fields here...
                     });
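    LINQ to SQL cannot translate a join across two contexts into one SQL statement, so the usual pattern is to materialize the smaller side and finish the join in memory; a sketch, where contextB.LogEntries and its CaseId column are placeholders for the actual log table:

        // Pull the already-processed case ids from context B into a set
        // (one query), then filter context A's rows in memory - and no
        // 2100-parameter limit applies.
        var processedIds = new HashSet<int>(
            contextB.LogEntries.Select(l => l.CaseId));

        var unprocessed = contextA.Cases
            .Where(c => monitoredInboxList.Contains(c.INBOXES.inboxName))
            .AsEnumerable()                        // switch to LINQ to Objects
            .Where(c => !processedIds.Contains(c.swHDCaseId))
            .ToList();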

    Read the article

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k. Assume it is possible to generate a non-biased integer from 1-k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time.

    In particular, I would be interested in a 'generative' algorithm. That is, I would be interested in an algorithm that has O(1) initial overhead, and then produces a new element for each slot in the list, taking O(1) time per slot.

    If no solution exists to the problem as described, I would still like to know about solutions that do not meet my constraints in one or more of the following ways (and/or in other ways if necessary):

    - the time complexity is worse than O(n)
    - the algorithm is biased with regards to the final positions of the symbols
    - the algorithm is not generative

    I should add that this problem appears to be the same as the problem of randomly sorting the integers from 1-k, since we can sort the list of integers from 1-k and then for each integer i in the new list, we can produce the symbol ai.
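    This is the Fisher-Yates shuffle: O(n) total, unbiased given an unbiased integer generator, and in its "inside-out" form it builds the new list in a single pass; a sketch in Python:

        import random

        def shuffled(symbols):
            # Inside-out Fisher-Yates: O(1) work per element, O(n) total.
            # Every ordering is equally likely, assuming randint is unbiased.
            out = []
            for i, s in enumerate(symbols):
                j = random.randint(0, i)   # inclusive bounds
                out.append(s)              # provisionally place s at slot i
                if j != i:
                    out[i] = out[j]        # move the earlier occupant to slot i
                    out[j] = s             # and drop s into slot j
            return out

    One caveat against the strict 'generative' requirement: a slot's content isn't final until the pass completes; the classic backward in-place variant finalizes one slot per step instead, which is about as generative as an unbiased O(n) shuffle gets.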

    Read the article

  • Batch insert mode with hibernate and oracle: seems to be dropping back to slow mode silently

    - by Chris
    I'm trying to get a batch insert working with Hibernate into Oracle, according to what I've read here: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html, but with my benchmarking it doesn't seem any faster than before. Can anyone suggest a way to prove whether Hibernate is using batch mode or not? I hear that there are numerous reasons why it may silently drop into normal mode (e.g. associations and generated ids), so is there some way to find out why it has gone non-batch?

    My hibernate.cfg.xml contains this line, which I believe is all I need to enable batch mode:

        <property name="jdbc.batch_size">50</property>

    My insert code looks like this:

        List<LogEntry> entries = ...a list of 100 LogEntry data classes...
        Session sess = sessionFactory.getCurrentSession();
        for (LogEntry e : entries) {
            sess.save(e);
        }
        sess.flush();
        sess.clear();

    My LogEntry class has no associations; the only interesting field is the id:

        @Entity
        @Table(name="log_entries")
        public class LogEntry {
            @Id
            @GeneratedValue
            public Long id;
            // ...other fields - strings and ints...
        }

    However, since it is Oracle, I believe the @GeneratedValue will use the sequence generator, and I believe that only the 'identity' generator will stop bulk inserts. So if anyone can explain why it isn't running in batch mode, or how I can find out for sure whether it is or isn't in batch mode, or find out why Hibernate is silently dropping back to slow mode, I'd be most grateful. Thanks
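    Two hedged suggestions, since the details are version-dependent. First, the settings that commonly matter here (the hibernate.-prefixed spellings are the documented forms; order_inserts groups consecutive saves of the same entity so the driver can actually batch them):

        <property name="hibernate.jdbc.batch_size">50</property>
        <property name="hibernate.order_inserts">true</property>
        <property name="hibernate.generate_statistics">true</property>

    Second, proving it: show_sql prints one INSERT per entity whether or not batching happens, so it proves nothing either way. A JDBC-level proxy such as p6spy (an assumption, not something the Hibernate docs mandate) logs the actual addBatch/executeBatch calls, which is the ground truth for whether 50 rows went over in one round trip.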

    Read the article

  • logic before dispatcher + controller?

    - by Spoonface
    I believe for a typical MVC web application the router / dispatcher routine is used to decide which controller is loaded based primarily on the area requested in the url by the user. However, in addition to checking the url query string, I also like to use the dispatcher to check whether the user is currently logged in or not to decide which controller is loaded. For example if they are logged in and request the login page, the dispatcher would load their account instead. But is this a fairly non-standard design? Would it violate MVC in any way? I only ask as the examples I've read through this weekend have had no major calculations performed before the dispatcher routine, and commonly check whether the user is logged in or not per controller, and then redirect where necessary. But to me it seems odd to redirect a logged in user from the login area to account area if you could just load the account controller in the first place? I hope I've explained my consternation well enough, but could anyone offer some details on how they handle logged in users, and similar session data?

    Read the article

  • regexp target last main li in list

    - by veilig
    I need to target the starting tag of the last top-level LI in a list that may or may not contain sublists in various positions - without using CSS or Javascript. Is there a simple/elegant regexp that can help with this? I'm no guru with them, but it appears the need for greedy/non-greedy selectors when I'm selecting all the middle text (.*) / (.+) changes as nested lists are added and moved around in the list - and this is throwing me off.

        $pattern = '/^(<ul>.*)<li>(.+<\/li><\/ul>)$/';
        $replacement = '$1<li id="lastLi">$3';

    Perhaps there is an easier approach? Converting to XML to target the LI and then converting back? i.e.:

    Single element:

        <ul>
          <li>TARGET</li>
        </ul>

    Multiple elements:

        <ul>
          <li>foo</li>
          <li>TARGET</li>
        </ul>

    Nested list before the end:

        <ul>
          <li>
            foo
            <ul>
              <li>bar</li>
            </ul>
          </li>
          <li>TARGET</li>
        </ul>

    Nested list at the end:

        <ul>
          <li>foo</li>
          <li>
            TARGET
            <ul>
              <li>bar</li>
            </ul>
          </li>
        </ul>
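    Nested lists are exactly what a single regexp can't track reliably (matching would need to count <ul> depth), so the "convert and target" instinct is sound; a sketch with PHP's DOMDocument, assuming the markup lives in $html:

        $doc = new DOMDocument();
        $doc->loadHTML($html);

        // the first <ul> in the document is the top-level list
        $ul = $doc->getElementsByTagName('ul')->item(0);

        // walk only the direct children, skipping text nodes, so LIs
        // inside sublists are never candidates
        $last = null;
        foreach ($ul->childNodes as $node) {
            if ($node->nodeName === 'li') {
                $last = $node;
            }
        }
        if ($last !== null) {
            $last->setAttribute('id', 'lastLi');
        }
        echo $doc->saveHTML();

    Note that loadHTML wraps fragments in html/body tags, so the output may need trimming if the snippet has to round-trip exactly.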

    Read the article

  • Turning a series of raw images into movie frames in Android

    - by Nicholas Killewald
    I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage. Looking over the Android API, it looks like there are calls in it to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in before they get fed into a movie (plus, the images come way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want it to do (at least without touching the NDK). Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?

    Read the article

  • GPG error occurs while using "deb file:/local-path-to-repo ..." in /etc/apt/sources.list

    - by Chandler.Huang
    I need to install packages in an environment with no internet connection. My plan is to download the dists structure from the internet and then add the file path to /etc/apt/sources.list. So I downloaded the related structure, which includes ubuntu/dists/precise, precise-backports, precise-proposed, precise-security and precise-updates, from an ftp mirror server. Then I removed the original sources and added the following to my /etc/apt/sources.list:

        deb file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe
        deb-src file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe

    Then I got a GPG error as follows after apt-get update:

        root@openstack:/~# apt-get update
        Ign file: precise InRelease
        Get:1 file: precise Release.gpg [198 B]
        Get:2 file: precise Release [50.1 kB]
        Ign file: precise Release
        Get:3 file: precise/main TranslationIndex [3,761 B]
        Get:4 file: precise/multiverse TranslationIndex [2,716 B]
        Get:5 file: precise/restricted TranslationIndex [2,636 B]
        Get:6 file: precise/universe TranslationIndex [2,965 B]
        Reading package lists... Done
        W: GPG error: file: precise Release: The following signatures were invalid: BADSIG 0976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>

    I had tried the following steps after googling, but in vain:

        sudo apt-get clean
        cd /var/lib/apt
        sudo mv lists lists.old
        sudo mkdir -p lists/partial
        sudo apt-get update

    Is there any way to resolve this? And why does this error occur? Thanks a lot.
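    A BADSIG on a Release file usually means the copied Release/Release.gpg pair no longer matches the index files next to it - an incompletely or inconsistently mirrored dists tree is the common cause with hand-copied ftp snapshots, so re-fetching the tree in one pass (or with a mirroring tool such as apt-mirror or debmirror) is the first thing to try. Alternatively, since you control the local copy, APT can be told to treat a single source as trusted and skip the signature check; a sketch (the [trusted=yes] option assumes your APT version is recent enough to support it):

        deb [trusted=yes] file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe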

    Read the article

  • No "redefinition of default parameter error" for class template member function?

    - by STingRaySC
    Why does the following give no compilation error?

        // T.h
        template<class T>
        class X {
        public:
            void foo(int a = 42);
        };

        // Main.cpp
        #include "T.h"
        #include <iostream>

        template<class T>
        void X<T>::foo(int a = 13) {
            std::cout << a << std::endl;
        }

        int main() {
            X<int> x;
            x.foo(); // prints 42
        }

    It seems as though the 13 is just silently ignored by the compiler. Why is this? The kooky thing is that if the template declaration is in Main.cpp instead of a header file, I do indeed get the default parameter redefinition error. Now, I know the compiler will complain about this if it were just an ordinary (non-template) function. What does the standard have to say about default parameters in class template member functions or function templates?

    Read the article

  • Resumable Upload in Ruby on Rails

    - by user253011
    Hi, I have been searching for a way to do resumable file upload in RoR. In conclusion, I found out that, other than a Java applet, no client-side, cross-platform agent can access the file system in such a way as to request the file from the position where the upload got terminated (due to any reason) - with some exceptions like http://github.com/taf2/resume-up/tree/master (built in native Ruby, but requires Google Gears, which is not "reliable" yet when it comes to cross-platform; almost the same story as ActiveX!). Since the only reliable option left is a Java applet, is there any good tutorial/forum/documentation for those paid Java applets, such as "thin slice upload" etc., to make them work with a Rails application? I found one, http://github.com/dassi/mediaclue (a German-only, non-multilingual application), in which they used JumpLoader. But in that application, I am unable to see resumable functionality. Scratching my head against their documentation, I found http://jumploader.com/doc_resume.html - it says that JumpLoader has resume functionality via cross-session resume, the one I am looking for (if the user closes the browser, the new session gets hold of uncompleted uploaded files from the old session against the user id). But I can't find any example on their demos page with actual pause/RESUME functionality in a continuous manner! Is it even possible to achieve that kind of resumable functionality? Please tell me about any options/examples/demos, preferably deployed in Rails. I shall be very much obliged. ~ Thanks

    Read the article

  • Getting a GestureOverlayView

    - by Codejoy
    I have been using some nice tutorials on drawing graphics on my Android. I wanted to also add in the cool gesture demo found here: http://developer.android.com/resources/articles/gestures.html. That takes these lines of code:

        GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
        gestures.addOnGesturePerformedListener(this);

    This is fine and dandy, yet I realize in my demo I'm trying to build using code from "Playing with Graphics in Android". The demos make sense, everything makes sense, but I found out that by using:

        setContentView(new Panel(this));

    as is required by the Playing with Graphics tutorials, findViewById seems to no longer be valid and returns null. At first I was about to post a stupider question as to why this is happening, but a quick test of playing with the setContentView made me realize the cause of findViewById returning null; I just do not know how to remedy the issue. What's the key I am missing here? I realize that the new Panel is doinking some reference up, but I am not sure how to make the connection here. The R.id.gestures is defined right in the main.xml as (just like the tutorial):

        <android.gesture.GestureOverlayView
            android:id="@+id/gestures"
            android:layout_width="fill_parent"
            android:layout_height="0dip"
            android:layout_weight="1.0" />

    So I did confirm that setContentView(new Panel(this)) is causing the issue. I know the issue is that I have to figure out how to add the android.gesture.GestureOverlayView to the Panel class somehow, I am just not sure how to go about this. After fighting with this, I generally know what I need to do, just not how to do it. I think I need either the equivalent of creating a Panel in that main.xml OR figuring out how to build what's in main.xml for the gestures in code. I am close, because I did this:

        GestureOverlayView gestures = new GestureOverlayView(this);

    which gets me a non-null gestures. Unfortunately, since I am not telling it to fill parent anywhere, I don't think it's really showing up, so I am trying hard to figure out layout params. Am I even on the right track?
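    That does look like the right track; a sketch of finishing it in code (untested), where the overlay - itself a ViewGroup - wraps the Panel and becomes the content view:

        GestureOverlayView gestures = new GestureOverlayView(this);
        // fill the screen, matching what main.xml would have done
        gestures.setLayoutParams(new ViewGroup.LayoutParams(
                ViewGroup.LayoutParams.FILL_PARENT,
                ViewGroup.LayoutParams.FILL_PARENT));
        gestures.addView(new Panel(this));  // overlay draws gestures on top
        gestures.addOnGesturePerformedListener(this);
        setContentView(gestures);

    The other route keeps main.xml: call setContentView(R.layout.main) first (after which findViewById works again), then fetch the overlay with findViewById(R.id.gestures) and addView(new Panel(this)) into it.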

    Read the article

  • Avoiding repeated subqueries when 'WITH' is unavailable

    - by EloquentGeek
    MySQL v5.0.58. Tables, with foreign key constraints etc. and other non-relevant details omitted for brevity:

        CREATE TABLE `fixture` (
          `id` int(11) NOT NULL auto_increment,
          `competition_id` int(11) NOT NULL,
          `name` varchar(50) NOT NULL,
          `scheduled` datetime default NULL,
          `played` datetime default NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `result` (
          `id` int(11) NOT NULL auto_increment,
          `fixture_id` int(11) NOT NULL,
          `team_id` int(11) NOT NULL,
          `score` int(11) NOT NULL,
          `place` int(11) NOT NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `team` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(50) NOT NULL,
          PRIMARY KEY (`id`)
        );

    Where:

    - a draw will set result.place to 0
    - result.place will otherwise contain an integer representing first place, second place, and so on

    The task is to return a string describing the most recently played result in a given competition for a given team. The format should be "def Team X,Team Y" if the given team was victorious, "lost to Team X" if the given team lost, and "drew with Team X" if there was a draw. And yes, in theory there could be more than two teams per fixture (though 1 v 1 will be the most common case). This works, but feels really inefficient:

        SELECT CONCAT(
          (SELECT CASE `result`.`place`
                    WHEN 0 THEN "drew with"
                    WHEN 1 THEN "def"
                    ELSE "lost to"
                  END
           FROM `result`
           WHERE `result`.`fixture_id` =
             (SELECT `fixture`.`id`
              FROM `fixture`
              LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
              WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
              ORDER BY `fixture`.`played` DESC
              LIMIT 1)
           AND `result`.`team_id` = 1),
          ' ',
          (SELECT GROUP_CONCAT(`team`.`name`)
           FROM `fixture`
           LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
           LEFT JOIN `team` ON `result`.`team_id` = `team`.`id`
           WHERE `fixture`.`id` =
             (SELECT `fixture`.`id`
              FROM `fixture`
              LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
              WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
              ORDER BY `fixture`.`played` DESC
              LIMIT 1)
           AND `team`.`id` != 1)
        )

    Have I missed something really obvious, or should I simply not try to do this in one query? Or does the current difficulty reflect a poor table design?
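    Since MySQL 5.0 has no WITH clause, the standard workaround is to compute the repeated "latest fixture" subquery once in a derived table and join everything else to it; a sketch of the reshaped query (untested against the real data):

        SELECT CONCAT(
                 CASE my.place WHEN 0 THEN 'drew with'
                               WHEN 1 THEN 'def'
                               ELSE 'lost to' END,
                 ' ',
                 GROUP_CONCAT(t.name))
        FROM (
          -- evaluated once: the most recent fixture for team 1 in competition 2
          SELECT f.id
          FROM fixture f
          JOIN result r ON r.fixture_id = f.id
          WHERE f.competition_id = 2 AND r.team_id = 1
          ORDER BY f.played DESC
          LIMIT 1
        ) AS last_fx
        JOIN result my   ON my.fixture_id = last_fx.id AND my.team_id = 1
        JOIN result them ON them.fixture_id = last_fx.id AND them.team_id <> 1
        JOIN team t      ON t.id = them.team_id
        GROUP BY my.place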

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you? I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment: The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand. Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it. I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me? EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site. If we put a few static file maps (for .js, .css, .png, etc) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching the integrated mode in terms of performance? The thing i'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow for the output cache to handle non ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically i've found that the two configurations operate on par when things go well, but if the page references resources that are not available, the integrated mode tends to fail much more quickly than the classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.

    Read the article

  • SQL Server 2008 - Full Text Query

    - by user208662
    Hello, I have two tables in a SQL Server 2008 database at my company. The first table represents the products that my company sells. The second table contains the product manufacturer's details. These tables are defined as follows:

        Product
        -------
        ID
        Name
        ManufacturerID
        Description

        Manufacturer
        ------------
        ID
        Name

    As you can imagine, I want to make it as easy as possible for our customers to query this data. However, I'm having problems writing a forgiving yet powerful search query. For instance, I'm anticipating people will search based on phonetic spellings, so the search terms may not match the exact data in my database. In addition, I think some individuals will search by manufacturer's name first, but I want the matching product names to appear first. Based on these requirements, I'm currently working on the following query:

        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.Rank as 'Rank'
        from Product p
        inner join Manufacturer m on p.ManufacturerID = m.ID
        inner join CONTAINSTABLE(Product, Name, @searchQuery) as r

    Oddly, this query is throwing an error. However, I have no idea why. Squiggles appear to the right of the last parenthesis in Management Studio. The tooltip says "An expression of non-boolean type specified in a context where a condition is expected". I understand what this statement means. However, I guess I do not know how ContainsTable works. What am I doing wrong? Thank you
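    CONTAINSTABLE is a rowset function: it behaves like a table in the FROM clause, so an INNER JOIN against it still needs an ON clause - that missing condition is what the "expression of non-boolean type" error is pointing at. Its result set exposes [KEY] (the full-text key column of Product) and RANK; a sketch of the corrected join:

        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.RANK as 'Rank'
        from Product p
        inner join Manufacturer m on p.ManufacturerID = m.ID
        inner join CONTAINSTABLE(Product, Name, @searchQuery) as r
                on p.ID = r.[KEY]
        order by r.RANK desc

    For the phonetic forgiveness, FREETEXTTABLE (same shape, same join) matches on meaning and inflectional forms rather than the strict CONTAINS syntax, which may suit customer-typed queries better.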

    Read the article

  • Dictionaries with more than one key per value in Python

    - by nickname
    I am attempting to create a nice interface to access a data set where each value has several possible keys. For example, suppose that I have both a number and a name for each value in the data set. I want to be able to access each value using either the number OR the name. I have considered several possible implementations:

    1. Using two separate dictionaries, one for the data values organized by number, and one for the data values organized by name.
    2. Simply assigning two keys to the same value in a dictionary.
    3. Creating dictionaries mapping each name to the corresponding number, and vice versa.
    4. Attempting to create a hash function that maps each name to a number, etc. (related to the above).
    5. Creating an object to encapsulate all three pieces of data, then using one key to map dictionary keys to the objects and simply searching the dictionary to map the other key to the object.

    None of these seem ideal. The first seems ugly and unmaintainable. The second also seems fragile. The third/fourth seem plausible, but seem to require either much manual specification or an overly complex implementation. Finally, the fifth loses constant-time performance for one of the lookups. In C/C++, I believe that I would use pointers to reference the same piece of data from different keys. I know that the problem is rather similar to a database lookup problem by a non-key column; however, I would like (if possible) to maintain the approximate O(1) performance of Python dictionaries. What is the most Pythonic way to achieve this data structure?
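    One sketch that keeps O(1) for both lookups: let the dictionary play the role of the C pointers by registering the same object under each of its keys - option 5's encapsulation without the linear search (this assumes name and number keys can never collide with each other):

        class Record:
            def __init__(self, number, name, value):
                self.number, self.name, self.value = number, name, value

        index = {}

        def add(record):
            index[record.number] = record  # O(1) lookup by number
            index[record.name] = record    # O(1) lookup by name, same object

        add(Record(42, "answer", 3.14))
        assert index[42] is index["answer"]  # one object, two keys

    Deletion stays cheap too: given either key, the record carries its other key, so both entries can be removed without a search.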

    Read the article

  • Visual SourceSafe (VSS): "Access to file (filename) denied" error

    - by tk-421
    Hi, can anybody help with the above SourceSafe error? I've spent hours trying to find a fix. I've also Googled the heck out of it but couldn't find a scenario matching mine, because in my case only a few files (not all) are affected. Here's what I found:

    - only a few files in my project generate this error
    - other files in the same directory (for example, App_Code has one of the problem files) work fine
    - I've tried checking out from both the VSS client and Visual Studio
    - another developer can check out the main problem file without any problems

    This sounds like a permission issue for my user, right? However:

    - I found the location of one of the problem files in VSS's data directory (using VSS's naming format, as in 'fddaaaaa.a') and checked its permissions; everything looks fine and its permissions match those of other files I can check out successfully
    - I can see no differences in the file properties between working and non-working files

    What else can I check? Has anyone encountered this problem before and found a solution? Thanks.

    P.S.: SourceGear, svn or git are not options, unfortunately.

    P.P.S.: Tried unsuccessfully to add tag "sourcesafe."

    EDIT: Hey Paddy, I tried to click 'add comment' to respond to your comment, but I'm getting a javascript error when loading this page in IE8 ("jquery undefined," etc.) so this isn't working. This is when checking out files, and yes, I've obliterated my local copy more times than I can remember. ;)

    EDIT 2: Thanks for the responses, guys (again I can't 'add comment' due to jQuery not loading, maybe blocked as discussed in Meta). If the problem was caused by antivirus or a bad disk, would other users still be able to check out the file(s)? That's the case here, which makes me think it's a permission issue specific to my account. However, I've looked at the permissions and they match both other users' settings and settings on other files which I can check out.

    Read the article

  • Enforce link in Team foundation server bug work item for duplicates

    - by Tewr
    We have just started out with Team Foundation Server 2008 / Visual Studio Team System, and we are pleased to find how we can export and modify work items to our needs. However, this last thing that would make the setup perfect for us has proved somewhat difficult.

    We have exported the Bug work item type and have made modifications to it to appear differently to different groups of users. We do, however, see a potential problem in non-developers reporting bugs which turn out to be duplicates. We would like to enforce that users who close a ticket with resolved reason: duplicate also create a link to the bug which is perceived as the first bug report. I have looked at System.RelatedLinkCount and put in the rule:

        <FIELD type="Integer" name="RelatedLinkCount" refname="System.RelatedLinkCount">
          <WHEN field="Microsoft.VSTS.Common.ResolvedReason" value="duplicate">
            <PROHIBITEDVALUES>
              <LISTITEM value="0" />
            </PROHIBITEDVALUES>
          </WHEN>
        </FIELD>

    However, when I try to put anything in that scope, the importer tells me that System.RelatedLinkCount does not accept the rule, no matter what I put, but the rule above shows what I am trying to do (even though the most preferable rule would also check that the bug I link to is not a duplicate as well, though this is overkill :P). Has anyone else tried to enforce rules like this in work items? Is there another approach to solving the same issue? I am thankful for any thoughts on the matter.

    Read the article

  • Python file iterator over a binary file with newer idiom.

    - by drewk
    In Python, for a binary file, I can write this:

        buf_size = 1024 * 64  # this is an important size...
        with open(file, "rb") as f:
            while True:
                data = f.read(buf_size)
                if not data:
                    break
                # deal with the data....

    With a text file that I want to read line-by-line, I can write this:

        with open(file, "r") as file:
            for line in file:
                # deal with each line....

    Which is shorthand for:

        with open(file, "r") as file:
            for line in iter(file.readline, ""):
                # deal with each line....

    This idiom is documented in PEP 234, but I have failed to locate a similar idiom for binary files. I have tried this:

        >>> with open('dups.txt','rb') as f:
        ...     for chunk in iter(f.read, ''):
        ...         i += 1
        >>> i
        1    # 30 MB file, i == 1 means it was read in one go...

    I tried putting iter(f.read(buf_size), '') but that is a syntax error because of the parens after the callable in iter(). I know I could write a function, but is there a way with the default idiom of for chunk in file: where I can use a buffer size versus a line-oriented read?

    Thanks for putting up with the Python newbie trying to write his first non-trivial and idiomatic Python script.
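    The parens problem goes away once the buffer size is frozen into the callable itself; functools.partial (or an equivalent lambda) does exactly that, keeping the PEP 234-style idiom:

        from functools import partial

        buf_size = 1024 * 64
        with open(file, "rb") as f:
            # partial(f.read, buf_size) is a zero-argument callable; iter()
            # keeps calling it until it returns the sentinel '' (use b'' on
            # Python 3, where binary reads return bytes).
            for chunk in iter(partial(f.read, buf_size), ''):
                pass  # deal with the data....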

    Read the article
