Search Results

Search found 2714 results on 109 pages for 'extremely frustrated'.

Page 89 of 109

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi all, let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than deal with these issues as they occur, I am hoping to design my application from the ground up with this in mind. I have decided to tackle the problem by using the Microsoft Message Queue and performing the inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts may need to be recalled (i.e. retrieved) immediately. Imagine this is for a POS system and you need to recall the last transaction, one that still hasn't been inserted. The way I decided to handle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that arise in such a scenario, essentially dirty reads and the like, and have concluded that for my purposes I can control them.)

    However, this is where things get a little nasty. I've worked out how to get the messages back (a trivial enough problem), but where I am stuck is this: how do I create a generic (or at least somewhat generic) way of querying my message queue, one that minimizes the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (though I have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly. Are there any patterns for this kind of problem that I can use? Am I going about this the wrong way? Does anyone have ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and all input would be highly appreciated and seriously considered. Thanks again.
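    One pattern that fits this description is a small specification/predicate abstraction: express the filter once as a predicate over the domain type, apply it in memory to the messages still sitting in the queue, and reuse or translate the same predicate for the SQL side. A minimal sketch, where the Transaction type and the peek-style queue access are assumptions for illustration rather than anything from the question:

        // Hypothetical illustration - the Transaction type is an assumption, not from the question.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Messaging;

        public class Transaction { public Guid Id; public DateTime CreatedAt; }

        public static class PendingStore
        {
            // Peek (non-destructively) at the queued, not-yet-inserted items and filter them
            // with the same predicate the data layer would apply to rows already in the database.
            public static IEnumerable<Transaction> Query(
                MessageQueue queue, Func<Transaction, bool> predicate)
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(Transaction) });
                return queue.GetAllMessages()
                            .Select(m => (Transaction)m.Body)
                            .Where(predicate);
            }
        }

    The data layer can then union the queue results with the database results for the same predicate, which is what creates the "single set of data" illusion described above.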


  • Efficiency of manually written loops vs operator overloads (C++)

    - by Sagekilla
    Hi all, in the program I'm working on I have 3-element arrays, which I use as mathematical vectors for all intents and purposes. In the course of writing my code, I was tempted to just roll my own Vector class with simple +, -, *, /, etc. overloads so I can simplify statements like:

        for (int i = 0; i < 3; i++)
            r[i] = r1[i] - r2[i];
        // becomes:
        r = r1 - r2;

    which should be more or less identical in generated code. But when it comes to more complicated things, could this really impact my performance heavily? One example from my code is this. Manually written version:

        for (int j = 0; j < 3; j++) {
            p.vel[j] = p.oldVel[j] + (p.oldAcc[j] + p.acc[j]) * dt2 + (p.oldJerk[j] - p.jerk[j]) * dt12;
            p.pos[j] = p.oldPos[j] + (p.oldVel[j] + p.vel[j]) * dt2 + (p.oldAcc[j] - p.acc[j]) * dt12;
        }

    Using a Vector class with operator overloads:

        p.vel = p.oldVel + (p.oldAcc + p.acc) * dt2 + (p.oldJerk - p.jerk) * dt12;
        p.pos = p.oldPos + (p.oldVel + p.vel) * dt2 + (p.oldAcc - p.acc) * dt12;

    I am compiling my code for maximum possible speed, and it's extremely important that this code runs quickly and calculates accurately. So will relying on my Vector class for these sorts of things really affect me? For those curious, this is part of some numerical integration code, which is not a trivial part of my program. Any insight would be appreciated, as would any idioms or tricks I'm unaware of.
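    For reference, a minimal sketch of the kind of 3-element vector class being discussed (an assumed shape, not the poster's code). With the operators defined inline like this, optimizing compilers can usually produce the same code as the hand-written loops, though that is worth verifying against the compiler's output:

        // Sketch only: a fixed-size 3-vector with inline arithmetic operators.
        struct Vec3 {
            double v[3];

            double&       operator[](int i)       { return v[i]; }
            const double& operator[](int i) const { return v[i]; }
        };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] + b.v[i];
            return r;
        }

        inline Vec3 operator-(const Vec3& a, const Vec3& b) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] - b.v[i];
            return r;
        }

        inline Vec3 operator*(const Vec3& a, double s) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] * s;
            return r;
        }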


  • Dynamically cast a control type in runtime

    - by JayT
    Hello, I have an application in which I dynamically create controls on a form from a database. This works well, but my problem is the following:

        private Type activeControlType;

        private void addControl(ContainerControl inputControl, string ControlName, string Namespace,
                                string ControlDisplayText, DataRow drow, string cntrlName)
        {
            Assembly assem;
            Type myType = Type.GetType(ControlName + ", " + Namespace);
            assem = Assembly.GetAssembly(myType);
            Type controlType = assem.GetType(ControlName);
            object obj = Activator.CreateInstance(controlType);
            Control tb = (Control)obj;
            tb.Click += new EventHandler(Cntrl_Click);
            inputControl.Controls.Add(tb);
            activeControlType = controlType;
        }

        private void Cntrl_Click(object sender, EventArgs e)
        {
            string test = ((activeControlType)sender).Text; // Problem ???
        }

    How do I dynamically cast the sender object to a class whose properties I can reference? I have googled and found myself trying everything I have come across. Now I am extremely confused and in need of some help. Thnx JT
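    A sketch of the two usual ways out, assuming every dynamically created type ultimately derives from Control: cast sender to the common base class, or fall back to reflection when the property is only known by name at run time:

        private void Cntrl_Click(object sender, EventArgs e)
        {
            // If every generated control derives from Control, the base-class cast is enough:
            string viaBase = ((Control)sender).Text;

            // If the property is only known at runtime, reflection avoids needing a static type:
            var prop = sender.GetType().GetProperty("Text");
            string viaReflection = (string)prop.GetValue(sender, null);
        }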


  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered subqueries, because they felt easier and quicker to write and maintain. Commonly, I'll write subqueries that may contain one or more subqueries against related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll use subqueries after the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name FROM organizations
             WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id
            ) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As more of a blanket question, what are the advantages and disadvantages of using subqueries versus a JOIN? Is one way more correct or accepted than the other?
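    As a point of comparison, a sketch of how the first query might look rewritten with JOINs (table and column names taken from the excerpt; the || concatenation suggests a PostgreSQL- or Oracle-style database):

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;

    The practical difference is that the correlated subqueries above are conceptually evaluated once per output row and per column, whereas the JOIN expresses the relationship once and lets the optimizer pick the access strategy.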


  • How do you clear your mind after 8-10 hours per day of coding?

    - by Bryan
    Related question: Ways to prepare your mind before coding? I'm having a hard time taking my mind off of work projects in my personal time. It's not that I have a stressful job or tight deadlines; I love my job. I find that after spending the whole day writing code and trying to solve problems, I have an extremely hard time getting it out of my mind. I'm constantly thinking about the current project, problem, or task. It's keeping me from relaxing, and in the long run it just builds stress. Personal projects help to some extent, but mostly just to distract me. I still have source code bouncing around my head 16 hours a day. I'm still relatively new to the workforce. Have you struggled with this, perhaps as a young developer? How did you overcome it? Can anyone offer general advice on winding down after a long programming session?


  • File.Move, why do I get a FileNotFoundException? The file exists...

    - by acidzombie24
    It's extremely weird, since the program is iterating over the file! outfolder and infolder are both on H:\, my external HD, using Windows 7. The idea is to move all folders that only contain files with the extension db or svn-base. When I try to move a folder I get an exception: VS2010 tells me it can't find the folder specified in dir. This code is iterating through dir, so how can it not find it? This is weird.

        string[] theExt = new string[] { "db", "svn-base" };
        foreach (var dir in Directory.GetDirectories(infolder))
        {
            bool hit = false;
            if (Directory.GetDirectories(dir).Count() > 0)
                continue;
            foreach (var f in Directory.GetFiles(dir))
            {
                var ext = Path.GetExtension(f).Substring(1);
                if (theExt.Contains(ext) == false)
                {
                    hit = true;
                    break;
                }
            }
            if (!hit)
            {
                var dst = outfolder + "\\" + Path.GetFileName(dir);
                File.Move(dir, outfolder); // FileNotFoundException: Could not find file dir.
            }
        }
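    For reference, a sketch of the distinction that often bites here (an assumption about the cause, not a confirmed diagnosis): File.Move expects file paths, whereas moving a folder needs Directory.Move, and the destination must be the full new folder path rather than its parent:

        if (!hit)
        {
            var dst = Path.Combine(outfolder, Path.GetFileName(dir));
            // Directory.Move moves a whole folder; the destination must be the new
            // folder path itself, not the folder it should end up inside.
            Directory.Move(dir, dst);
        }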


  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to coding and have just finished an app and tested it on both a 3G and a 3GS. On the 3GS it worked as normally as on the simulator. However, when I tried to run it on the 3G, the app became extremely slow. I'm not sure what the reason is, and I hope someone could shed some light on this for me. Generally, my app has a couple of view controller classes, with one being the title page, one being the main page, one being settings, etc. I used a dissolve to transition from the title page to the main page, but even this simple transition performs un-smoothly on a 3G! Other parts of the app involve zooming in on some images by scaling them up, switching images by push or dissolve upon receiving touch events, saving photos into the photo library, and storing and retrieving some photos in a folder and some data in a SQLite database; each of these actions is similarly un-smooth. Compared with graphics-heavy or math-heavy apps, I think mine is pretty simple. I have no clue why the app behaves so slowly and un-smoothly that it is barely usable on a 3G. Any help or direction would be much appreciated. Thanks for helping out.


  • In M-V-VM where does my code go?

    - by Nate Bross
    So, this is a pretty basic question, I hope. I have a web service that I've added through Add Service Reference. It has some methods to get the list and get the detail of a particular table in my database. What I'm trying to do is set up a UI as follows:

        On app load:
          - load the service proxy
          - call the GetList() method
          - display the results in a ListBox control
        When the user double-clicks an item in the ListBox:
          - display a modal dialog with a "detail" view

    I'm extremely new to using MVVM, so any help would be greatly appreciated. Additional information:

        // Service interface (simplification):
        interface IService
        {
            IEnumerable<MyObject> GetList();
            MyObject GetDetail(int id);
        }

        // Data object (simplification)
        class MyObject
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

    I'm thinking I should have something like this: a MainWindow containing a MyObjectViewUserControl, which displays the list and opens the modal window on double-click. Specific questions: What would my ViewModel class look like? Where does the code to handle the double-click go? Inside the UserControl? Sorry for the long details, but I'm very new to the whole thing and not educated enough to ask the right questions. I checked out the MVVM sample from wpf.codeplex.com and something isn't quite clicking for me yet, because it seems very confusing.
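    A minimal sketch of what the ViewModel half might look like, assuming the IService and MyObject shapes above. Command wiring is omitted; the double-click handler in the view (or an attached behaviour) would set Selected, call LoadDetail(), and open the modal dialog with the result:

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class MyObjectListViewModel : INotifyPropertyChanged
        {
            private readonly IService _service;
            private MyObject _selected;

            public MyObjectListViewModel(IService service)
            {
                _service = service;
                Items = new ObservableCollection<MyObject>(_service.GetList());
            }

            // The ListBox's ItemsSource binds here.
            public ObservableCollection<MyObject> Items { get; private set; }

            // The ListBox's SelectedItem binds here.
            public MyObject Selected
            {
                get { return _selected; }
                set { _selected = value; OnPropertyChanged("Selected"); }
            }

            // Called by the view when an item is double-clicked.
            public MyObject LoadDetail()
            {
                return Selected == null ? null : _service.GetDetail(Selected.ID);
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }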


  • Syncing two AS3 NetStreams

    - by Lowgain
    I'm writing an app that requires an audio stream to be recorded while a backing track is played. I have this working, but there is an inconsistent gap between playback and recording starting. I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts so I can calculate the delay and trim it server-side. This has also proved to be a challenge, as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties, like the streams' buffer sizes, etc. I'm thinking now that, as my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). This leaves me with the new problem of properly injecting this sound into the NetStream. If anyone has any idea whether any of these approaches will work, how to execute them, or some alternatives, that would be extremely helpful! I've been working on this issue for a while.


  • Accents in uploaded file being replaced with '?'

    - by Katfish
    I am building a data import tool for the admin section of a website I am working on. The data is in both French and English and contains many accented characters. Whenever I attempt to upload a file, parse the data, and store it in my MySQL database, the accents are replaced with '?'. I have text files containing the data (the charset is ISO-8859-1), which I upload to my server using CodeIgniter's file upload library. I then read the file in PHP. My code is similar to this:

        $this->upload->do_upload();
        $data = array('upload_data' => $this->upload->data());
        $fileHandle = fopen($data['upload_data']['full_path'], "r");
        while (($line = fgets($fileHandle)) !== false) {
            echo $line;
        }

    This produces lines with the accents replaced by '?'. Everything else is correct. If I download my uploaded file from my server over FTP, the charset is still ISO-8859-1, but a diff reveals that the file has changed. However, if I open the file in TextEdit, it displays properly. I attempted to use PHP's stream_encoding method to explicitly set my file stream to ISO-8859-1, but my build of PHP does not have the method. After running out of ideas, I tried wrapping my strings in both utf8_encode and utf8_decode. Neither worked. If anyone has any suggestions about things I could try, I would be extremely grateful.
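    One thing worth ruling out (this is an assumption about the cause, not a diagnosis): if the database connection and the rest of the stack expect UTF-8, converting each line explicitly from ISO-8859-1 on read makes the encoding hand-off visible. A sketch, with $path standing in for the uploaded file's full path:

        <?php
        $fileHandle = fopen($path, 'r');
        while (($line = fgets($fileHandle)) !== false) {
            // Convert from the file's ISO-8859-1 to UTF-8 before echoing or inserting,
            // and make sure the MySQL connection charset matches (e.g. SET NAMES utf8).
            echo iconv('ISO-8859-1', 'UTF-8', $line);
        }
        fclose($fileHandle);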


  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?


  • List of Big-O for PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all of PHP's built-in functions are as fast as expected. Consider these two possible implementations of a function that finds whether a number is prime using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach ($array_of_number as $number) {
            $result_array[$number] = in_array($number, $prime_array);
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach ($array_of_number as $number) {
            $result_array[$number] = array_key_exists($number, $prime_array);
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and by occasionally looking at the source code. Now for the question: is there a list of the theoretical (or practical) big-O times for all* of the PHP built-in functions? *Or at least the interesting ones. For example, I find it very hard to predict the big-O of the functions listed here, because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.
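    As a rough way to confirm this kind of difference empirically, a small benchmark sketch (absolute timings will vary with PHP version and hardware):

        <?php
        $n = 200000;
        $haystack = range(0, $n - 1);        // plain list: in_array() scans linearly
        $lookup   = array_flip($haystack);   // value => index map: array_key_exists() is a hash probe

        $start = microtime(true);
        in_array($n - 1, $haystack);         // worst case: the last element
        $linear = microtime(true) - $start;

        $start = microtime(true);
        array_key_exists($n - 1, $lookup);
        $hashed = microtime(true) - $start;

        printf("in_array: %.6fs, array_key_exists: %.6fs\n", $linear, $hashed);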


  • WPF binding to a boolean on a control

    - by Jose
    I'm wondering if someone has a simple, succinct solution for binding to a dependency property that needs to be the converse of a ViewModel property. Here's an example: I have a TextBox that is disabled based on a property in the DataContext, e.g.:

        <TextBox IsEnabled="{Binding CanEdit}" Text="{Binding MyText}"/>

    The requirement changes and I want to make it read-only instead of disabled, so without changing my ViewModel I could do this in the UserControl resources:

        <UserControl.Resources>
            <m:NotConverter x:Key="NotConverter"/>
        </UserControl.Resources>

    and then change the TextBox to:

        <TextBox IsReadOnly="{Binding CanEdit, Converter={StaticResource NotConverter}}" Text="{Binding MyText}"/>

    which I personally think is EXTREMELY verbose. I would love to be able to just do this (notice the !):

        <TextBox IsReadOnly="{Binding !CanEdit}" Text="{Binding MyText}"/>

    But alas, that is not an option that I know of. I can think of two alternatives: create an attached property IsNotReadOnly on FrameworkElement(?) and bind to that property, or change my ViewModel to add a property CanEdit and another CannotEdit, which I would be kind of embarrassed by because I believe it adds an irrelevant property to the class, which I don't think is good practice. The main reason for the question is that in my project the above isn't just for one control, so in trying to keep my project as DRY and readable as possible, I'm throwing this out to anyone who feels my pain and has come up with a solution :)
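    For completeness, a sketch of the kind of NotConverter the snippet refers to (an assumed implementation of the standard IValueConverter pattern, not code from the question):

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class NotConverter : IValueConverter
        {
            // Negates a bool on the way from the view-model to the view...
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return !(bool)value;
            }

            // ...and on the way back, so TwoWay bindings keep working.
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return !(bool)value;
            }
        }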


  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list to my page, but this list should not contain just the videos that were uploaded in the previous week; it should consider all videos. I'm currently recording views in a single column, so I have no information on when a video was watched. So now I'm searching for a way to record this data. The first option is the most obvious (and the correct one, as far as I know): have a separate table into which you insert a new row every time you want to record a view (storing the ID of the video and a timestamp). I'm worried that I would quickly get huge amounts of data in this table, and that queries using it would be extremely slow (we get about 3 million views a month). The second solution isn't as flexible but is easier on the database: I would add 7 columns to the "videos" table, one for each day of the week (views_monday, views_tuesday, views_wednesday, ...), increment the value in the correct column based on the current day, and reset the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns. What do you think? Should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, and I'm using MySQL.
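    A sketch of what the first (per-view row) approach might look like in MySQL, with an index that keeps the weekly aggregation cheap (table and column names are assumptions, not from the question):

        CREATE TABLE video_views (
            video_id  INT UNSIGNED NOT NULL,
            viewed_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_viewed_at (viewed_at, video_id)
        ) ENGINE=InnoDB;

        -- Most watched videos over the last 7 days
        SELECT video_id, COUNT(*) AS views_last_week
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views_last_week DESC
        LIMIT 20;

    Old rows can be rolled up into daily totals and purged periodically, which keeps the raw table from growing without bound.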


  • Link compatibility between C++ and D

    - by Caspin
    D easily interfaces with C. D just as easily interfaces with C++, but (and it's a big but) the C++ needs to be extremely trivial. The code cannot use:

        - namespaces
        - templates
        - multiple inheritance
        - a mix of virtual and non-virtual methods
        - more?

    I completely understand the inheritance restriction. The rest, however, feel like artificial limitations. Now, I don't want to be able to use std::vector<T> directly, but I would really like to be able to link with std::vector<int> as an externed template. The C++ interfacing page has this particularly depressing comment:

        D templates have little in common with C++ templates, and it is very unlikely that any sort of
        reasonable method could be found to express C++ templates in a link-compatible way with D. This
        means that the C++ STL, and C++ Boost, likely will never be accessible from D.

    Admittedly I'll probably never need std::vector while coding in D, but I'd love to use Qt or Boost. So what's the deal? Why is it so hard to express non-trivial C++ classes in D? Would it not be worth it to add some special annotations or something to express at least namespaces?


  • PyGTK: Manually render an existing widget at a given Rectangle? (TextView in a custom CellRenderer)

    - by NicDumZ
    Hello! I am trying to draw a TextView inside the cell of a TreeView. (Why? I would like to have custom tags on the text, so that it can be clickable and trigger different actions/popup menus depending on where the user clicks.) I have been trying to write a customized CellRenderer for this purpose, but so far I have failed, because I find it extremely difficult to find generic documentation on rendering design in GTK. More than an answer to the specific question (which might be hard or not doable, and I'm not expecting you to do everything for me), I am first looking for documentation on how a widget is rendered, to understand how one is supposed to implement a CellRenderer. Can you share any link that explains, either for GTK or for PyGTK, the rendering mechanism? More specifically:

        - The size allocation mechanism (should I say protocol?). I understand that a window has a defined
          size and then queries its children, saying "my size is w x h, what would be your ideal size,
          buddy?", and then sometimes shrinks children when they can't all fit at their ideal sizes. Any
          specific documentation on that, and in particular on when this happens during rendering?
        - How are "built-in" widgets rendered? What kinds of methods do they call on the Widget base class?
          On the parent Window? When?
        - Do they use pango.Layout? Can you manually draw a TextView onto a pango.Layout object? This link
          gives an interesting example showing how you can draw content into a pango.Layout object and use
          it in a CellRenderer. I guess I could adapt it if only I understood how TextView widgets are
          rendered.

    Or perhaps, to put it more simply: given an existing widget instance, how does one render it at a specific gdk.Rectangle? Thanks a lot.


  • What are the original reasons for ToString() in Java and .NET?

    - by d.
    I've used ToString() modestly in the past and I've found it very useful in many circumstances. However, my usage of this method would hardly justify putting it in none other than System.Object. My wild guess is that, at some point during the work carried out and the meetings held to come up with the initial design of the .NET framework, it was decided that it was necessary - or at least extremely useful - to include a ToString() method that would be implemented by everything in the .NET framework. Does anyone know what the exact reasons were? Am I missing a ton of situations where ToString() proves useful enough to be part of System.Object? What were the original reasons for ToString()? Thanks a lot! PS - Again: I'm not questioning the method or implying that it's not useful; I'm just curious to know what makes it SO useful as to be placed in System.Object. Side note - imagine this:

        AnyDotNetNativeClass someInitialObject = new AnyDotNetNativeClass([some constructor parameters]);
        AnyDotNetNativeClass initialObjectFullCopy = AnyDotNetNativeClass.FromString(someInitialObject.ToString());

    Wouldn't this be cool? EDIT(1): (A) Based on some answers, it seems that .NET languages inherited this from Java, so I'm adding "Java" to the subject and to the tags as well. If someone knows the reasons why this was implemented in Java, please shed some light! (B) Static hypothetical FromString vs. serialization: sure, but that's quite a different story, right?


  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        // for the master server (i.e. UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect($Host, $User, $Password);
        if (!$Link) {
            die("Master database is currently unavailable. Please try again later.");
        }

        // this connection can be used for READ-ONLY (i.e. SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect($Host_Local, $User_Local, $Password_Local);

        // fail back to the master if the slave db is down
        if (!$Link_Local) {
            $Link_Local = mysql_connect($Host, $User, $Password);
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to the localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
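    On the timeout question specifically, a sketch of two options from that era of PHP (the 2-second values are placeholders to tune): lowering mysql.connect_timeout for the legacy mysql extension, or using mysqli, which exposes an explicit connect-timeout option:

        <?php
        // Option 1: legacy mysql extension - shrink the connect timeout (seconds).
        ini_set('mysql.connect_timeout', 2);
        $Link_Local = @mysql_connect($Host_Local, $User_Local, $Password_Local);
        if (!$Link_Local) {
            $Link_Local = mysql_connect($Host, $User, $Password); // fall back to the master
        }

        // Option 2: mysqli - set the timeout explicitly before connecting.
        $mysqli = mysqli_init();
        mysqli_options($mysqli, MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        if (!@mysqli_real_connect($mysqli, $Host_Local, $User_Local, $Password_Local)) {
            mysqli_real_connect($mysqli, $Host, $User, $Password); // fall back to the master
        }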


  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and want to grab Apache log records between two given dates. Assume that a date is formatted like this: 22/Dec/2009:00:19 (day/month/year:hour:minute). Currently I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 221220090019, making a date comparison trivial. But running a regex on each record of a large file, say one containing a quarter of a million records, is extremely costly. Is there any other method that doesn't involve regex substitution? Thanks in advance. Edit: here's the function doing the conversion/comparison:

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t);
            split(t, a, "[/:]");
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]);
            a[2] = sprintf("%02d", (RSTART + 2) / 3);
            s = a[3] a[2] a[1] a[4] a[5];
            return s >= from && s <= to;
        }

    "from" and "to" are the interval bounds in the aforementioned format, and "t" is the raw Apache log date/time field (e.g. [22/Dec/2009:00:19:36).


  • Python Interactive Interpreter always returns "Invalid syntax" on Windows

    - by user559217
    I've encountered an extremely confusing problem: whatever I type into the Python interpreter returns "invalid syntax". See the examples below. I've tried fooling around with the code page of the prompt I run the interpreter from, but it doesn't seem to help at all. Furthermore, I haven't been able to find this particular, weird bug elsewhere online. Any assistance anyone could provide would be lovely. I've already tried reinstalling Python, but I didn't have any luck; the problem is there in both 3.1.3 and 2.7. Running: Python version 3.1.3, Windows XP SP3. Getting:

        C:\Program Files\Python31>.\python
        Python 3.1.3 (r313:86834, Nov 27 2010, 18:30:53) [MSC v.1500 32 bit (Intel)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        >>> 2+2
          File "<stdin>", line 1
            2+2
              ^
        SyntaxError: invalid syntax
        >>> x = "Oh, fiddlesticks."
          File "<stdin>", line 1
            x = "Oh, fiddlesticks."
              ^
        SyntaxError: invalid syntax


  • What about parallelism across a network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example Parallel extensions which are directly available in .NET 4). Now what about the parallelism across network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to web ressource or local network database here }); I understand that it is very different from the multi-core parallelism. The two most obvious differences would probably be: The fact that such parallel task will be limited to computing, without being able for example to use files stored locally (but why not a database?), or even to use local variables, because it would be rather two distinct applications than two threads of the same application, The very specific implementation, requiring not just a separate thread (which is quite easy), but spanning a process on different machines, then communicating with them over local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerfull stuff with much less pain? Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In a such case, every action may be done in parallel, resulting in probably six to ten seconds for overall process instead of forty.


  • IIS to SQL Server kerberos auth issues

    - by crosan
    We have a third-party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to reach the website we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site, which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which we want it to do (we want this account to pass along the AD credentials of the person signed into the website to SQL Server, instead of a service account). We also set up the SPNs for the SrvWeb_iis account via the following command:

        setspn -A HTTP/SrvWeb.company.com SrvWeb_iis

    The website pulls up, but the section of the website that makes the call to the database returns the message:

        Cannot execute database query. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries for my logins, and they seem to be using NTLM and not Kerberos:

        Logon Type:             3
        Logon Process:          NtLmSsp
        Authentication Package: NTLM

    Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.


  • PHP's fopen is terminally failing

    - by Skittles
    Okay, I have GOT to be missing something totally rudimentary here. I have an extremely simple use of PHP's fopen function, but for some reason it will not open the file, no matter what I do. The odd part is that I use fopen in another function in the same script and it works perfectly. I'm using fclose in both functions, so I know it's not a matter of a rogue file handle. I have also confirmed the target file's path and its existence. I'm running the script at the command line as root, so I know it's not Apache that's the cause. And since I am running the script as root, I am fairly confident that permissions are not the issue. So, what on earth am I missing here?

        function get_file_list() {
            $file = '/home/site/tmp/return_files_list.txt';
            $fp = fopen($file, 'r')
                or die("Could not open file: /home/site/tmp/return_files_list.txt for reading.\n");
            $files_list = array();
            while ($line = fgets($fp)) {
                $files_list[] = $line;
            }
            fclose($fp);
            return $files_list;
        }

        function num_records_in_file($filename) {
            $fp = fopen($filename, 'r'); # or die("Could not open file: $filename\n");
            $counter = 0;
            if ($fp) {
                while (!feof($fp)) {
                    $line = fgets($fp);
                    $arr = explode('|', $line);
                    if (($arr[0] != 'HDR' && $arr[0] != 'TRL') && $arr[0] != '') {
                        ++$counter;
                        continue;
                    }
                }
            }
            fclose($fp);
            return $counter;
        }

    As requested, here are both functions. The second function is passed an absolute path to the file. That is what I used to confirm that the file is there and that the path is correct.
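    A small diagnostic sketch that sometimes narrows this kind of failure down (it assumes nothing about the actual cause): ask PHP what it thinks of the path right before the failing fopen().

        <?php
        $file = '/home/site/tmp/return_files_list.txt';

        var_dump(file_exists($file));   // does PHP see the file at all?
        var_dump(is_readable($file));   // permissions or open_basedir restrictions?
        clearstatcache();               // rule out a stale stat cache

        $fp = fopen($file, 'r');
        if ($fp === false) {
            $err = error_get_last();    // the actual warning fopen() raised
            echo $err['message'], "\n";
        }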


  • Bing search API and Azure

    - by Gapton
    I am trying to programatically perform a search on Microsoft Bing search engine. Here is my understanding: There was a Bing Search API 2.0 , which will be replaced soon (1st Aug 2012) The new API is known as Windows Azure Marketplace. You use different URL for the two. In the old API (Bing Search API 2.0), you specify a key (Application ID) in the URL, and such key will be used to authenticate the request. As long as you have the key as a parameter in the URL, you can obtain the results. In the new API (Windows Azure Marketplace), you do NOT include the key (Account Key) in the URL. Instead, you put in a query URL, then the server will ask for your credentials. When using a browser, there will be a pop-up asking for a/c name and password. Instruction was to leave the account name blank and insert your key in the password field. Okay, I have done all that and I can see a JSON-formatted results of my search on my browser page. How do I do this programmatically in PHP? I tried searching for the documentation and sample code from Microsoft MSDN library, but I was either searching in the wrong place, or there are extremely limited resources in there. Would anyone be able to tell me how do you do the "enter the key in the password field in the pop-up" part in PHP please? Thanks alot in advance.


  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value? A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.) A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.) Something else I haven't thought of. Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

