Search Results

Search found 5998 results on 240 pages for 'rise against'.


  • Spring Security DB Authentication w/Hibernate and hashed passwords?

    - by Seth
    I'm trying to set up Spring Security 3 to authenticate users against my Hibernate 3 database. I'm storing only SHA-1 hashes of the passwords in the database (not plaintext). I've looked at this and this, which tell me to implement my own UserDetailsService. Unfortunately, the UserDetails that loadUserByUsername spits out seems to need the plaintext password, which I don't have. How is this usually handled? Can Spring Security actually do what I need here? Am I missing something?
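
    A minimal sketch of how this is usually handled in Spring Security 3: the UserDetails carries the stored hash, and a password encoder is registered on the authentication provider so the submitted password is hashed before the comparison. The user-service-ref bean name below is a placeholder for your UserDetailsService:

    ```xml
    <!-- security namespace configuration; "myUserDetailsService" is a hypothetical
         name for the custom UserDetailsService that loads the SHA-1 hash via Hibernate -->
    <authentication-manager>
        <authentication-provider user-service-ref="myUserDetailsService">
            <!-- hashes the submitted password with SHA-1 before comparing it to the
                 value returned by loadUserByUsername() -->
            <password-encoder hash="sha"/>
        </authentication-provider>
    </authentication-manager>
    ```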

    Read the article

  • PHP explode not filling in array spot 0

    - by Billy Winterhouse
    I have a file we will call info.txt, in UNIX format, that has only the following in it: #Dogs #Cats #Birds #Rabbits and am running this against it: $filename = "info.txt"; $fd = fopen ($filename, "r"); $contents = fread ($fd,filesize ($filename)); fclose ($fd); $delimiter = "#"; $insideContent = explode($delimiter, $contents); Now everything looks to be working fine except that when I display the array I get the following: [0] => [1] => Dogs [2] => Cats [3] => Birds [4] => Rabbits I checked the .txt file to make sure there wasn't any space or hidden character in front of the first #, so I'm at a loss as to why this is happening, other than I feel like I'm missing something terribly simple. Any ideas? Thanks in advance!
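
    This happens because the string starts with the delimiter: explode() produces an element for the (empty) text before the first #. A minimal sketch of two ways around it, using the same file name as above:

    ```php
    <?php
    $contents = file_get_contents("info.txt"); // e.g. "#Dogs #Cats #Birds #Rabbits"

    // Option 1: explode, then drop empty elements and reindex from 0
    $parts = array_values(array_filter(explode("#", $contents), 'strlen'));

    // Option 2: strip the leading delimiter before exploding
    $parts = explode("#", ltrim($contents, "#"));

    print_r($parts); // index 0 now holds "Dogs ..." instead of an empty string
    ```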

    Read the article

  • ASP.NET MVC and mixed mode authentication

    - by Alan T
    Hi All I have a scenario whereby I require users to be able to authenticate against an ASP.NET MVC web application using either Windows authentication or Forms authentication. If the user is on the internal network they will use Windows authentication and if they are connecting externally they will use Forms authentication. I’ve seen quite a few people asking the question how do I configure an ASP.NET MVC web application for this, but I haven’t found a complete explanation. Please can someone provide a detailed explanation, with code examples, on how this would be done? Thanks. Alan T
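
    A complete walkthrough is more than one answer, but the pattern usually looks like this: the site as a whole runs Forms authentication, and a separate entry point that IIS protects with Windows (Integrated) authentication issues the forms ticket for intranet users. A hedged C# sketch of that entry point, with hypothetical controller and route names:

    ```csharp
    using System.Web.Mvc;
    using System.Web.Security;

    // Mapped under a path (e.g. /WinLogin - name is illustrative) that IIS protects
    // with Windows authentication, while the rest of the site stays on Forms auth.
    public class WinLoginController : Controller
    {
        public ActionResult Index()
        {
            if (User.Identity.IsAuthenticated)
            {
                // The Windows identity is trusted, so convert it into a forms ticket
                // that the Forms-authenticated part of the site will recognise.
                FormsAuthentication.SetAuthCookie(User.Identity.Name, false);
                return RedirectToAction("Index", "Home");
            }
            // External users never hit this path; they use the normal Forms login page.
            return new HttpUnauthorizedResult();
        }
    }
    ```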

    Read the article

  • Comparison of the multiprocessing module and pyro?

    - by fivebells
    I use pyro for basic management of parallel jobs on a compute cluster. I just moved to a cluster where I will be responsible for using all the cores on each compute node. (On previous clusters, each core has been a separate node.) The python multiprocessing module seems like a good fit for this. I notice it can also be used for remote-process communication. If anyone has used both frameworks for remote-process communication, I'd be grateful to hear how they stack up against each other. The obvious benefit of the multiprocessing module is that it's built-in from 2.6. Apart from that, it's hard for me to tell which is better.
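
    For reference, the remote-process feature mentioned above is multiprocessing.managers; a minimal sketch, with the host, port and authkey made up for illustration:

    ```python
    from multiprocessing.managers import BaseManager

    class ClusterManager(BaseManager):
        pass

    # --- server side (runs on one node) ---
    jobs = []
    ClusterManager.register('get_jobs', callable=lambda: jobs)
    server = ClusterManager(address=('', 50000), authkey=b'cluster-secret')
    # server.get_server().serve_forever()   # uncomment to actually serve

    # --- client side (runs on another node) ---
    ClusterManager.register('get_jobs')
    client = ClusterManager(address=('head-node', 50000), authkey=b'cluster-secret')
    # client.connect()
    # proxy = client.get_jobs()             # proxy object backed by the server-side list
    # proxy.append('work item')
    ```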

    Read the article

  • Overwriting the content from one MOSS content database to another

    - by 78lro
    We have a content database on our live MOSS server. It contains one site collection with several sub-sites. I'm using the stsadm export command to produce a .cmp file, then moving this to our test server in a different farm. I then want to import this content into the content database on our test farm, but using the stsadm import command leaves me with all the existing test data as well as the live data. I tried detaching the existing content database from test in Central Admin and creating a new empty one, and then running the import against that, but the import failed because, obviously, there's no root site in the empty database. The aim is to have the data on test look like live, clearing out all the test data. Can anyone suggest a good approach to this type of problem?
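
    One approach, sketched below with made-up URLs and file names, is to use stsadm backup/restore for whole-site-collection copies (restore can overwrite the target), or to delete and recreate the target site collection before importing. The switches are from memory, so check them against stsadm -help before running anything:

    ```bat
    REM Full-fidelity copy of the whole site collection (restore overwrites the target):
    stsadm -o backup  -url http://liveserver/sites/portal -filename portal.bak
    stsadm -o restore -url http://testserver/sites/portal -filename portal.bak -overwrite

    REM Or, staying with export/import, clear the target site collection first:
    stsadm -o deletesite -url http://testserver/sites/portal
    stsadm -o createsite -url http://testserver/sites/portal -ownerlogin DOMAIN\admin -owneremail admin@example.com
    stsadm -o import -url http://testserver/sites/portal -filename portal.cmp
    ```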

    Read the article

  • FormsAuthentication.SignOut() does not log the user out.

    - by Jason
    Smashed my head against this a bit too long. How do I prevent a user from browsing a site's pages after they have been logged out using FormsAuthentication.SignOut? I would expect this to do it: FormsAuthentication.SignOut(); Session.Abandon(); FormsAuthentication.RedirectToLoginPage(); But it doesn't. If I type in a URL directly, I can still browse to the page. I haven't used roll-your-own security in a while so I forget why this doesn't work.
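
    When the pages are still reachable after SignOut, it is usually either the browser serving cached copies or the old cookie being honoured for the rest of its lifetime. A hedged sketch of a more thorough sign-out handler, using only standard ASP.NET APIs (assumes it runs inside a page or controller where Session and Response are available, with System.Web and System.Web.Security imported):

    ```csharp
    FormsAuthentication.SignOut();
    Session.Abandon();

    // Overwrite the forms cookie with one that has already expired, so a copy
    // kept by the browser cannot be replayed.
    var expired = new HttpCookie(FormsAuthentication.FormsCookieName, string.Empty)
    {
        Expires = DateTime.Now.AddYears(-1)
    };
    Response.Cookies.Add(expired);

    // Stop the browser serving cached copies of protected pages after logout.
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();

    FormsAuthentication.RedirectToLoginPage();
    ```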

    Read the article

  • SharpSVN and C# Problem

    - by Sam F
    When trying to add SharpSVN to my C# project, running code that makes SharpSVN-related calls gives me this error: "FileLoadException was unhandled: Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information." What I did was add the references from the downloaded SharpSVN zip file and add using SharpSvn; When I compile that it works fine, but when I add: string targetPath = "https://bobl/svn/ConsoleApplication1"; SvnTarget target; SvnTarget.TryParse(targetPath, out target); it breaks with that error. I've searched for this error and have had no luck finding a solution.
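
    That message is the standard complaint when a CLR 2 mixed-mode assembly (like SharpSvn built for v2.0.50727) is loaded into a .NET 4 process. The usual fix, sketched below, is the useLegacyV2RuntimeActivationPolicy switch in the application's app.config:

    ```xml
    <?xml version="1.0"?>
    <configuration>
      <startup useLegacyV2RuntimeActivationPolicy="true">
        <!-- allow the v2.0-built SharpSvn mixed-mode assembly to load in the v4 runtime -->
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
    </configuration>
    ```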

    Read the article

  • Hibernate Schema Validation Fails on Oracle Table Synonyms

    - by Rob
    I'm developing a Java web application that uses Hibernate (annotations-based) for persisting entities to an Oracle 11g database. The DBA created synonyms for the tables and requested that I use these synonyms instead of the physical tables. (Eg: Table "Foo" has synonym "S_Foo") If I have "hibernate.hbm2ddl.auto=validate" enabled, then the application fails on startup with "Missing Table: S_Foo". If I turn off the validation, then the app starts up fine and works properly. My guess is that Hibernate only checks against physical tables and not synonyms when validating that a table exists. Is there any way to enable Hibernate schema validation with synonyms? Can I specify both a physical table and a synonym in the annotation? I prefer having that extra safety check that the table structure is correct when the application starts up.
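
    One thing worth checking, hedged because support depends on the Hibernate version and dialect in use: Hibernate has a hibernate.synonyms property intended to make its metadata lookup include Oracle synonyms during schema validation. A sketch of the relevant hibernate.properties entries; verify the property is honoured in your version before relying on it:

    ```properties
    hibernate.hbm2ddl.auto=validate
    # ask Hibernate's metadata lookup to treat Oracle synonyms (e.g. S_Foo) as tables
    hibernate.synonyms=true
    ```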

    Read the article

  • UIImagePickerController and extracting EXIF data from existing photos

    - by tomtaylor
    It's well known that UIImagePickerController doesn't return the metadata of the photo after selection. However, a couple of apps in the app store (Mobile Fotos, PixelPipe) seem to be able to read the original files and the EXIF data stored within them, enabling the app to extract the geodata from the selected photo. They seem to do this by reading the original file from the /private/var/mobile/Media/DCIM/100APPLE/ folder and running it through an EXIF library. However, I can't work out a way of matching a photo returned from the UIImagePickerController to a file on disk. I've explored file sizes, but the original file is a JPEG, whilst the returned image is a raw UIImage, making it impossible to know the file size of the image that was selected. I'm considering making a table of hashes and matching against the first x pixels of each image. This seems a bit over the top though, and probably quite slow. Any suggestions?

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic. From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted: I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating. Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example: A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ... A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ... A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ... The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments: When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types. I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system. GET /foos/{id} # read a Foo POST /foos/{id} # create a Foo PUT /foos/{id} # update a Foo Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment. 
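
    To make that last paragraph concrete, here is a hedged illustration of what such a hypermedia response might look like; the media type, link relation names and URIs are all invented for the example:

    ```http
    GET /foos HTTP/1.1
    Accept: application/vnd.example.foo+json

    HTTP/1.1 200 OK
    Content-Type: application/vnd.example.foo+json

    {
      "foos": [
        {
          "name": "first foo",
          "links": [
            { "rel": "self",   "href": "/foos/a1b2" },
            { "rel": "edit",   "href": "/foos/a1b2" },
            { "rel": "parent", "href": "/foos" }
          ]
        }
      ],
      "links": [
        { "rel": "create-form", "href": "/foos/new" }
      ]
    }
    ```

    The client is given only the entry URI /foos and the media type; everything else it does comes from following rel names it understands in responses like this, which is the coupling Fielding wants documentation to describe.
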
Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich: Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck. So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying and how do you put it into practice when documenting/implementing REST APIs? Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).

    Read the article

  • Normal memory usage in Rails

    - by Erik
    I'm wondering how much memory usage is normal for a Ruby process in a Rails application? I really need something to benchmark against. In my dev environment (WEBrick) a single Ruby process uses about 61 MB to handle 10 simultaneous requests going non-stop. In my prod environment, Apache2 + Passenger starts 7 Ruby processes to handle the same amount of requests. Each of those processes also uses up about 60 MB. Is this normal? Also, where do I configure how many Ruby processes Passenger can start? Or will it start as many as there is memory available for? Thank you! ps. Using Rails 3 beta. ds.
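
    On the second question: the process count is capped in the Apache (or Nginx) configuration rather than growing until memory runs out. A sketch of the relevant directives, with illustrative values:

    ```apache
    # Upper bound on the number of application processes Passenger will spawn
    PassengerMaxPoolSize 6
    # Shut down application processes that have been idle for this many seconds
    PassengerPoolIdleTime 300
    ```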

    Read the article

  • LINQ 4 XML - What is the proper way to query deep in the tree structure?

    - by Keith Barrows
    I have an XML structure that is 4 deep: <?xml version="1.0" encoding="utf-8"?> <EmailRuleList xmlns:xsd="EmailRules.xsd"> <TargetPST name="Tech Communities"> <Parse emailAsList="true" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="@aspadvice.com" folder="Lists, ASP" saveAttachments="false" /> <EmailRule address="@sqladvice.com" folder="Lists, SQL" saveAttachments="false" /> <EmailRule address="@xmladvice.com" folder="Lists, XML" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="[email protected]" folder="Special Interest Groups|Northern Colorado Architects Group" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Support|SpamBayes" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false"> <EmailRule address="[email protected]" folder="Support|GoDaddy" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Support|No-IP.com" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Discussions|Orchard Project" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false"> <EmailRule address="@agilejournal.com" folder="Newsletters|Agile Journal" saveAttachments="false"/> <EmailRule address="@axosoft.ccsend.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/> <EmailRule address="@axosoft.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/> <EmailRule address="@cmcrossroads.com" folder="Newsletters|CM Crossroads" saveAttachments="false" /> <EmailRule address="@urbancode.com" folder="Newsletters|Urbancode" saveAttachments="false" /> <EmailRule address="@urbancode.ccsend.com" folder="Newsletters|Urbancode" saveAttachments="false" /> <EmailRule address="@Infragistics.com" folder="Newsletters|Infragistics" saveAttachments="false" /> <EmailRule address="@zdnet.online.com" folder="Newsletters|ZDNet Tech Update Today" saveAttachments="false" /> <EmailRule address="@sqlservercentral.com" folder="Newsletters|SQLServerCentral.com" saveAttachments="false" /> <EmailRule address="@simple-talk.com" folder="Newsletters|Simple-Talk Newsletter" saveAttachments="false" /> </Parse> </TargetPST> <TargetPST name="[Sharpen the Saw]"> <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Social|LinkedIn USMC" saveAttachments="false"/> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false"> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Social|Cruise Critic" saveAttachments="false"/> </Parse> <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false"> <EmailRule address="@moody.edu" folder="Social|5 Love Languages" saveAttachments="false" /> <EmailRule address="@postmaster.twitter.com" folder="Social|Twitter" saveAttachments="false"/> <EmailRule address="@diabetes.org" folder="Physical|American Diabetes Association" saveAttachments="false"/> <EmailRule address="@membership.webshots.com" folder="Social|Webshots" saveAttachments="false"/> </Parse> 
</TargetPST> </EmailRuleList> Now, I have both an FromAddress and a ToAddress that is parsed from an incoming email. I would like to do a LINQ query against a class set that was deserialized from this XML. For instance: ToAddress = [email protected] FromAddress = [email protected] Query: Get EmailRule.Include(Parse).Include(TargetPST) where address == ToAddress AND Parse.ToAddress==true AND Parse.useJustDomain==false Get EmailRule.Include(Parse).Include(TargetPST) where address == [ToAddress Domain Only] AND Parse.ToAddress==true AND Parse.useJustDomain==true Get EmailRule.Include(Parse).Include(TargetPST) where address == FromAddress AND Parse.FromAddress==true AND Parse.useJustDomain==false Get EmailRule.Include(Parse).Include(TargetPST) where address == [FromAddress Domain Only] AND Parse.FromAddress==true AND Parse.useJustDomain==true I am having a hard time figuring this LINQ query out. I can, of course, loop on all the bits in the XML like so (includes deserialization into objects): XmlSerializer s = new XmlSerializer(typeof(EmailRuleList)); TextReader r = new StreamReader(path); _emailRuleList = (EmailRuleList)s.Deserialize(r); TargetPST[] PSTList = _emailRuleList.Items; foreach (TargetPST targetPST in PSTList) { olRoot = GetRootFolder(targetPST.name); if (olRoot != null) { Parse[] ParseList = targetPST.Items; foreach (Parse parseRules in ParseList) { EmailRule[] EmailRuleList = parseRules.Items; foreach (EmailRule targetFolders in EmailRuleList) { } } } } However, this means going through all these loops for each and every address. It makes more sense to me to query against the Objects. Any tips appreciated!
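
    A hedged sketch of the first lookup as a single LINQ-to-Objects query over the deserialized classes (requires using System.Linq; the property names Items, toAddress, useJustDomain and address are assumed to mirror the XML attributes and the generated classes shown in the loop above):

    ```csharp
    // toAddress / fromAddress hold the values parsed from the incoming email.
    // Lookup 1: exact To-address match, domain-only matching off.
    var matches =
        from pst in _emailRuleList.Items
        from parse in pst.Items
        from rule in parse.Items
        where parse.toAddress
           && !parse.useJustDomain
           && string.Equals(rule.address, toAddress, StringComparison.OrdinalIgnoreCase)
        select new { TargetPST = pst, Parse = parse, Rule = rule };

    // The other three lookups only flip parse.fromAddress / parse.useJustDomain and
    // compare against either the full address or just its domain part.
    ```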

    Read the article

  • JQuery - drop shadow plugin. Custom color

    - by Ockonal
    Hello, I'm using the jQuery plugin DropShadow: web site And I want to set the drop shadow color manually. From the documentation: "Color is specified in the usual manner, with a color name or hex value. The color parameter does not apply with transparent images." So, here is my code: { ... color: "black", swap: false } It works perfectly, and with '#000' instead of 'black' it works too... But if I need a different shadow color, for example '#fff000', the plugin doesn't work. I don't see any shadow. Why?

    Read the article

  • Design for Vacation Tracking System

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?)
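
    On the request for feedback about the parsing code: regular expressions are a fragile way to read XML, and a hedged alternative sketch using LINQ to XML (element names taken from the sample properties.xml above) would be:

    ```csharp
    using System;
    using System.Linq;
    using System.Xml.Linq;

    // Load and pick apart properties.xml without regular expressions.
    XDocument doc = XDocument.Load("properties.xml");
    XElement root = doc.Root; // <VacationProperties>

    string[] pn = root.Element("PropertyNames").Value.Split(',');
    string[] pt = root.Element("PropertyTypes").Value.Split(',');
    bool[] required = root.Element("PropertiesRequired").Value
                          .Split(',')
                          .Select(bool.Parse)
                          .ToArray();
    ```

    The same caution applies to the persistence code: parameterised INSERT statements would be safer than concatenating the property values into the SQL string.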

    Read the article

  • Dynamic Data Associate Related Table Value?

    - by davemackey
    I have created a LINQ-to-SQL project in Visual Studio 2010 using Dynamic Data. In this project I have two tables. One is called phones_extension and the other phones_ten. The list of columns in phones_extension looks like this: id, extension, prefix, did_flag, len, ten_id, restriction_class_id, sfc_id, name_display, building_id, floor, room, phone_id, department_id In phones_ten it looks like this: id, name, pbxid Now, I'd like to be able to somehow make it so that there is an association (or inheritance?) that essentially results in me being able to make a query like phones_extension.ten and have it give me the result of phones_ten.name. Right now I have to get phones_extension.ten_id and then match that against phones_ten.id - I'm trying to get the DBML to handle this translation automatically. Is this possible?
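
    This is what a LINQ-to-SQL association gives you: in the .dbml designer, draw an association from phones_ten.id (parent) to phones_extension.ten_id (child) and the designer generates navigation properties on both sides. A hedged sketch of the resulting query; the DataContext and navigation property names are assumptions about what the designer will generate:

    ```csharp
    using (var db = new PhonesDataContext())
    {
        var extensionsWithTen =
            from ext in db.phones_extensions
            select new
            {
                ext.extension,
                TenName = ext.phones_ten.name // navigation property created by the association
            };
    }
    ```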

    Read the article

  • LWJGL not working

    - by Jesse Welch
    I'm working on a homework assignment to modify code given by my professor using LightWeight Java Game Library. The problem is that I can't fully load the test code to begin testing modifications. I've linked against the jar file as it says to in the modifications, but I still have one lingering error. The import statement import org.lwjgl.util.glu.*; Cannot be resolved, so I have errors on the following lines, spread throughout the code: textures[0] = GLApp.makeTexture("green.bmp"); GLU.gluPerspective(45.0f, (float)Display.getDisplayMode().getWidth() / (float)Display.getDisplayMode().getHeight(), 0.1f, 100.0f); GLU.gluLookAt(cameraX, cameraY, cameraZ, lookX, lookY, lookZ, 0.0f, 1.0f, 0.0f);` Any ideas on what is going wrong?
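
    In LWJGL 2.x the org.lwjgl.util.* classes, including GLU, live in lwjgl_util.jar, which is separate from the core lwjgl.jar, so linking only the core jar leaves that import unresolved. A sketch of adding the util jar as well; the paths and the main class name are illustrative, and in Eclipse the equivalent is adding lwjgl_util.jar to the project's build path next to lwjgl.jar:

    ```sh
    javac -cp lwjgl.jar:lwjgl_util.jar GLApp.java MyAssignment.java
    java  -cp .:lwjgl.jar:lwjgl_util.jar -Djava.library.path=native/linux MyAssignment
    ```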

    Read the article

  • MySQL query performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi All I've just started a new job and noticed that the analysts computers are connected to the network at 100Mbps. The queries we run against the MySQL server can easily be 500MB+ and it seems at times when the servers are under high load the DBAs kill low priority jobs as they are taking too long to run. My question is this... How much of this server time is spent executing the request, and how much time is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps? Thanks Rob

    Read the article

  • wcf data service security configuration

    - by Daniel Pratt
    I'm in the process of setting up a WCF Data Services web service and I'm trying to sort out the security configuration. Although there's quite a lot of documentation out there for configuring WCF security, a lot of it seems to be outmoded or does not apply to my scenario. Ultimately, I am planning on managing authorization of operations via change interceptors. Thus, all I really need is the simplest way to permit a client to pass credentials along with a request and to be able to authenticate those credentials against either AD or an ASP.NET membership provider (I'd much prefer the latter unless it makes things much more complicated). I'm intending to manage encryption at the transport level (i.e. HTTPS). I'm hoping that the eventual solution does not involve a huge web.config. Likewise, I'd much prefer to avoid writing custom code for the purpose of authentication.

    Read the article

  • Add 2 values to 1 key in a PHP array

    - by Mike Munroe
    I have a result set of data that I want to write to an array in PHP. Here is my sample data: **Name** **Abbrev** Mike M Tom T Jim J Using that data, I want to create an array in PHP that looks like the following: 1|Mike|M 2|Tom|T 3|Jim|J I tried array_push($values, 'name', 'abbreviation') [pseudo code], which gave me the following: 1|Mike 2|M 3|Tom 4|T 5|Jim 6|J I need to do a lookup against this array and get the same key whether I look up "Mike" or "M". What is the best way to write my result set into an array as set out above, where name and abbreviation share the same key?
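
    A sketch of one way to do this (the sample rows are hard-coded for illustration): keep name and abbreviation together under one key, and build a small reverse index so either value resolves to that key:

    ```php
    <?php
    $rows = array(
        array('name' => 'Mike', 'abbrev' => 'M'),
        array('name' => 'Tom',  'abbrev' => 'T'),
        array('name' => 'Jim',  'abbrev' => 'J'),
    );

    $values = array(); // key => array('name' => ..., 'abbrev' => ...)
    $lookup = array(); // "Mike" => key, "M" => key, ...

    foreach ($rows as $i => $row) {
        $key = $i + 1;                  // 1-based keys, as in the example
        $values[$key] = $row;
        $lookup[$row['name']]   = $key;
        $lookup[$row['abbrev']] = $key;
    }

    echo $lookup['Mike'];               // 1
    echo $lookup['M'];                  // 1
    echo $values[$lookup['M']]['name']; // Mike
    ```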

    Read the article

  • Regular expression for Regular expressions?

    - by kavoir.com
    I have an app that lets the user input a regular expression. My question is how to check any regular expression the user enters and make sure it is valid, because if it is not there will be preg_match errors. I don't want to use the '@' before preg_match, so if there's a way to check the validity of the user's input regular expressions, that'd be great. PHP's regular expression syntax seems rather too complicated for me to come up with a regular expression that matches regular expressions. Any ideas or alternatives for achieving this?
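
    A sketch of one way to do it without the '@' operator: preg_match() returns false (rather than 0) when the pattern itself fails to compile, so test-compile the pattern against an empty string while a throwaway error handler swallows the warning. The helper name is made up:

    ```php
    <?php
    function is_valid_regex($pattern)
    {
        // Swallow the compilation warning instead of using the '@' operator.
        set_error_handler(function () { return true; });
        $compiles = preg_match($pattern, '') !== false;
        restore_error_handler();

        return $compiles;
    }

    var_dump(is_valid_regex('/fo+bar/i')); // bool(true)
    var_dump(is_valid_regex('/fo(+bar/')); // bool(false) - pattern fails to compile
    ```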

    Read the article

  • webrat, rspec, nokogiri segfault

    - by adamaig
    I'm getting a segfault in nokogiri (1.4.1) when I run (under cucumber 0.6.1/webrat 0.7.0/rspec 1.3.x): response.should have_selector("div", :class => "fieldWithErrors") and the div in the page is actually <div class="fieldWithErrors validation_error"> stuff </div> Everything runs fine if I just test nokogiri against a test document: >> require 'nokogiri' >> doc = Nokogiri::HTML.parse("<div class='a b'>love to have problems</div>") => ... >> doc.css(".a") => [#<Nokogiri::XML::Element:0x3d62ac name="div" attributes=[#<Nokogiri::XML::Attr:0x3d6258 name="class" value="a b">] children=[#<Nokogiri::XML::Text:0x3d5e68 "love to have problems">]>] So I want to know how to set up a minimal webrat test of an HTML fragment document to help file a bug.

    Read the article

  • .NET COM Interop with references to other libraries

    - by user262190
    Hello, I'm up against a problem when loading a class in a managed library from a COM Interop library. Basically I have some unmanaged C++ code and a COM Interop library written in C#, and finally a third library which is referenced by the COM Interop library and which contains a class: public class MyClass{ public MyClass(){} } What I'd like to do is call a function in the Interop library from my unmanaged C++ code. The C++ code doesn't need to know of the existence of the third library; it's only used within the Interop library. Init(){ MyClass _class = new MyClass(); } For some reason the line "MyClass _class = new MyClass();" in Init fails, and I don't get very useful error messages; all I have to go on is a few of these in my output window: "First-chance exception at 0x7c812afb in DotNet_Com_Call.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.." and the HRESULT of the "HRESULT hr = pDotNetCOMPtr->Init();" line in my C++ code is "The system cannot find the specified file". I'm new to COM, so if anyone has any ideas or pointers to get me going in the right direction, I'd appreciate it. Thanks

    Read the article

  • [PHP] Sanitizing strings to make them URL and filename safe?

    - by Xeoncross
    I am trying to come up with a function that does a good job of sanitizing certain strings so that they are safe to use in the URL (like a post slug) and also safe to use as file names. For example, when someone uploads a file I want to make sure that I remove all dangerous characters from the name. So far I have come up with the following function which I hope solves this problem and also allows foreign UTF-8 data. /** * Convert a string to the file/URL safe "slug" form * * @param string $string the string to clean * @param bool $is_filename TRUE will allow additional filename characters * @return string */ function sanitize($string = '', $is_filename = FALSE) { // Replace all weird characters with dashes $string = preg_replace('/[^\w\-'. ($is_filename ? '*~_\.' : ''). ']+/u', '-', $string); // Only allow one dash separator at a time (and make string lowercase) return mb_strtolower(preg_replace('/--+/u', '-', $string), 'UTF-8'); } Does anyone have any tricky sample data I can run against this - or know of a better way to safeguard our apps from bad names?

    Read the article

  • SQL WHERE clause not returning rows when field has NULL value

    - by JohnB
    Ok, so I'm aware of this issue: when SET ANSI_NULLS is ON, all comparisons against a null value evaluate to UNKNOWN (see "SQL And NULL Values in where clause" and "SQL Server return Rows that are not equal <> to a value and NULL"). However, I am trying to query a DataTable. I could add to my query: OR col_1 IS NULL OR col_2 IS NULL for every column, but my table has 47 columns, and I'm building dynamic SQL (string concatenation), and it just seems like a pain to do that. Is there another solution? I want to bring back all the rows that have NULL values in the WHERE comparison. UPDATE Example of query that gave me problems: string query = "col_1 not in ('Dorothy', 'Blanche') and col_2 not in ('Zborna', 'Devereaux')"; grid.DataContext = dataTable.Select(query).CopyToDataTable(); (didn't retrieve rows if/when col_1 = null and/or col_2 = null)
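
    A hedged sketch of one alternative: the DataColumn expression syntax used by DataTable.Select supports ISNULL(column, replacement), so NULLs can be coerced to a harmless value inside the filter instead of adding an OR ... IS NULL per column:

    ```csharp
    // NULL in col_1/col_2 becomes '' for the comparison, so those rows are no
    // longer filtered out by the NOT IN test.
    string query = "ISNULL(col_1,'') not in ('Dorothy', 'Blanche') " +
                   "and ISNULL(col_2,'') not in ('Zborna', 'Devereaux')";

    grid.DataContext = dataTable.Select(query).CopyToDataTable();
    ```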

    Read the article

  • sqlite - any improvements for this attach code (running multiple sql commands transactionally in sql

    - by Greg
    Hi, Is this code solid? I've tried to use "using" etc. Basically it's a method that takes a sequenced list of SQL commands to be run against a SQLite database. I assume that in SQLite, by default, all commands run on a single connection are handled transactionally - is this true? i.e. I should not need (and don't have in the code at the moment) a BeginTransaction or CommitTransaction. It's using http://sqlite.phxsoftware.com/ as the SQLite ADO.NET database provider. private int ExecuteNonQueryTransactionally(List<string> sqlList) { int totalRowsUpdated = 0; using (var conn = new SQLiteConnection(_connectionString)) { // Open connection (one connection so should be transactional - confirm) conn.Open(); // Apply each SQL statement passed in to sqlList foreach (string s in sqlList) { using (var cmd = new SQLiteCommand(conn)) { cmd.CommandText = s; totalRowsUpdated = totalRowsUpdated + cmd.ExecuteNonQuery(); } } } return totalRowsUpdated; }
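
    The assumption doesn't hold: with System.Data.SQLite each command autocommits unless an explicit transaction is opened, so the list is not atomic as written. A hedged sketch of the same method wrapped in a transaction:

    ```csharp
    private int ExecuteNonQueryTransactionally(List<string> sqlList)
    {
        int totalRowsUpdated = 0;
        using (var conn = new SQLiteConnection(_connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                foreach (string s in sqlList)
                {
                    using (var cmd = new SQLiteCommand(s, conn))
                    {
                        cmd.Transaction = tx;
                        totalRowsUpdated += cmd.ExecuteNonQuery();
                    }
                }
                // Commit only if every statement succeeded; if an exception escapes,
                // disposing the transaction rolls everything back.
                tx.Commit();
            }
        }
        return totalRowsUpdated;
    }
    ```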

    Read the article
