Search Results

Search found 9598 results on 384 pages for 'product rec'.

  • How can I use a Perl hash key that has a literal dot?

    - by imerez
    I have a hash in Perl which has been populated by some legacy code. The name of the key has now changed from simply reqHdrs to reqHdrs.bla, accessed as $rec->{reqHdrs.bla}. My problem is that I now can't seem to access this field from the hash. Any ideas? The following is my error output: Bareword "reqHdrs" not allowed while "strict subs" in use
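
    A hedged sketch of the usual fix (not from the original post): quoting the key makes Perl treat reqHdrs.bla as a plain string instead of two barewords joined by an operator, which is what triggers the strict-subs error.

        use strict;
        use warnings;

        my $rec = { 'reqHdrs.bla' => 'some value' };   # stand-in for the legacy hash

        # Quoting the key stops Perl parsing reqHdrs.bla as barewords
        my $value = $rec->{'reqHdrs.bla'};
        print "$value\n";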

  • MySQL DATE_FORMAT comparison to CURDATE() query...

    - by Crazy Serb
    Hey guys, I am just trying to pull all the records from my database that have a rec_date (varchar) stored as m/d/Y and are expired (as in, less than CURDATE()), and this call isn't giving me what I want: SELECT member_id, status, DATE_FORMAT(STR_TO_DATE(rec_date, '%m/%d/%Y'), '%Y-%m-%d') AS rec FROM members WHERE rec_date < CURDATE() AND status = '1' I'm obviously doing something wrong, so can you help? Thanks.
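
    A hedged sketch of the likely fix (column and table names taken from the post): because rec_date is a varchar, it has to be converted to a real DATE on the comparison side as well, otherwise the comparison against CURDATE() is not a date comparison.

        SELECT member_id, status,
               DATE_FORMAT(STR_TO_DATE(rec_date, '%m/%d/%Y'), '%Y-%m-%d') AS rec
        FROM members
        WHERE STR_TO_DATE(rec_date, '%m/%d/%Y') < CURDATE()
          AND status = '1';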

  • get greatest prime factor in F#

    - by Alex
    I had VS 11 Beta and the following code was working without problems: let rec fac x y = if (x = y) then y elif (x % y = 0I) then fac (x / y) y else fac x / (y + 1I);; Now I have installed VS 2012 RC and I get the following error: The type 'System.Numerics.BigInteger -> System.Numerics.BigInteger' is not compatible with the type 'System.Numerics.BigInteger' Is the code not correct, or is it F# Interactive that's at fault? It's F# 3.0.
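
    A hedged guess at the cause (not part of the original question): in the last branch, fac x / (y + 1I) parses as (fac x) / (y + 1I), so the division is applied to a partially applied function, which is exactly the function-versus-BigInteger mismatch the error reports. The recursive call was presumably meant to be:

        let rec fac x y =
            if x = y then y
            elif x % y = 0I then fac (x / y) y
            else fac x (y + 1I)   // pass the next candidate divisor as the second argument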

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language. How did you get started with programming?It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.But then presumably you did get somewhere with it at some point.At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.When did you feel like you did start to get somewhere?I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.When did you decide that programming might actually be something that you wanted to do as a career?It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard that I had been up to that point.How did you find coming to Red Gate and working with other coders?I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.If you could go back to the very start of that internship, is there something that you would tell yourself?Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.And how do you hold yourself to that?Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.So you’re an advocate of code review?Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. 
The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.Do you think there’s more we could do with them?I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.To get more into the nitty gritty: how do you like to debug code?The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.Why is the debugger the last resort?Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value come from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.As a tester, where do you think the split should lie between software engineers and testers?I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.In your experience, do the software engineers tend to do much testing themselves?They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. 
I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.So you think testers should be able to write production code?Yes, although given most testers skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.If the software engineers are consistently writing tests at all levels, what role do you think the role of a tester is?I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There’s more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.So because you’re working on those features, you lose that holistic view of the whole system?Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested it by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.Are you a believer that it should all be automated if possible?Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated test is very good but they follow that strict path, and they never check anything off the path. 
The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’re almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they’ll keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.How do you think this fits into the idea of continuously deploying, so long as the tests pass?With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, when you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it, no matter how hard you try there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.Any particular reason for trying Vala?I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.What are some of the features that it’s missing?Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.Where it does win over C#?Everything is non-nullable by default, you never have to check that something’s unexpectedly null.Also, Vala has code contracts. 
This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method, it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it. When are those actually checked?They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye!One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions.I think Raymond Chen on Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picassa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.Any particular features that you’ve appreciated?I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project on C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done under the scenes. 
It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in.It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.What do you think separates great programmers from everyone else?Probably design choices. Choosing to write it a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s probably what separates, is the speed at which they can see what’s the best path to write the program in. More Red Gater Coder interviews

  • Mpd as pppoe server with authorisation by freeradius2

    - by Korjavin Ivan
    I install freeradius2, add to raddb/users: test Cleartext-Password := "test1" Service-Type = Framed-User, Framed-Protocol = PPP, Framed-IP-Address = 10.36.0.2, Framed-IP-Netmask = 255.255.255.0, start radiusd, and check auth: radtest test test1 127.0.0.1 1002 testing123 Sending Access-Request of id 199 to 127.0.0.1 port 1812 User-Name = "test" User-Password = "test1" NAS-IP-Address = 127.0.0.1 NAS-Port = 1002 Message-Authenticator = 0x00000000000000000000000000000000 rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=199, length=44 Service-Type = Framed-User Framed-Protocol = PPP Framed-IP-Address = 10.36.0.2 Framed-IP-Netmask = 255.255.255.0 Works fine. Next step. Add to mpd.conf: radius: set auth disable internal set auth max-logins 1 CI set auth enable radius-auth set radius timeout 90 set radius retries 2 set radius server 127.0.0.1 testing123 1812 1813 set radius me 127.0.0.1 create link template L pppoe set link action bundle B set link max-children 1000 set link no multilink set link no shortseq set link no pap chap-md5 chap-msv1 chap-msv2 set link enable chap set pppoe acname Internet load radius create link template em1 L set pppoe iface em1 set link enable incoming And trying to connect, auth failed, here is mpd log: mpd: [em1-2] LCP: auth: peer wants nothing, I want CHAP mpd: [em1-2] CHAP: sending CHALLENGE #1 len: 21 mpd: [em1-2] LCP: LayerUp mpd: [em1-2] CHAP: rec'd RESPONSE #1 len: 58 mpd: [em1-2] Name: "test" mpd: [em1-2] AUTH: Trying RADIUS mpd: [em1-2] RADIUS: Authenticating user 'test' mpd: [em1-2] RADIUS: Rec'd RAD_ACCESS_REJECT for user 'test' mpd: [em1-2] AUTH: RADIUS returned: failed mpd: [em1-2] AUTH: ran out of backends mpd: [em1-2] CHAP: Auth return status: failed mpd: [em1-2] CHAP: Reply message: ^AE=691 R=1 mpd: [em1-2] CHAP: sending FAILURE #1 len: 14 mpd: [em1-2] LCP: authorization failed Then i start freeradius as radiusd -fX, and get this log: rad_recv: Access-Request packet from host 127.0.0.1 port 46400, id=223, length=282 NAS-Identifier = "rubin.svyaz-nt.ru" NAS-IP-Address = 127.0.0.1 Message-Authenticator = 0x14d36639bed8074ec2988118125367ea Acct-Session-Id = "815965-em1-2" NAS-Port = 2 NAS-Port-Type = Ethernet Service-Type = Framed-User Framed-Protocol = PPP Calling-Station-Id = "00e05290b3e3 / 00:e0:52:90:b3:e3 / em1" NAS-Port-Id = "em1" Vendor-12341-Attr-12 = 0x656d312d32 Tunnel-Medium-Type:0 = IEEE-802 Tunnel-Client-Endpoint:0 = "00:e0:52:90:b3:e3" User-Name = "test" MS-CHAP-Challenge = 0xbb1e68d5bbc30f228725a133877de83e MS-CHAP2-Response = 0x010088746ae65b68e435e9d045ad6f9569b60000000000000000b56991b4f20704cb6c68e5982eec5e98a7f4b470c109c1b9 # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default +- entering group authorize {...} ++[preprocess] returns ok ++[chap] returns noop [mschap] Found MS-CHAP attributes. Setting 'Auth-Type = mschap' ++[mschap] returns ok [eap] No EAP-Message, not doing EAP ++[eap] returns noop [files] users: Matched entry DEFAULT at line 172 ++[files] returns ok Found Auth-Type = MSCHAP # Executing group from file /usr/local/etc/raddb/sites-enabled/default +- entering group MS-CHAP {...} [mschap] No Cleartext-Password configured. Cannot create LM-Password. [mschap] No Cleartext-Password configured. Cannot create NT-Password. [mschap] Creating challenge hash with username: test [mschap] Client is using MS-CHAPv2 for test, we need NT-Password [mschap] FAILED: No NT/LM-Password. Cannot perform authentication. 
[mschap] FAILED: MS-CHAP2-Response is incorrect ++[mschap] returns reject Failed to authenticate the user. Login incorrect: [test] (from client localhost port 2 cli 00e05290b3e3 / 00:e0:52:90:b3:e3 / em1) Using Post-Auth-Type REJECT # Executing group from file /usr/local/etc/raddb/sites-enabled/default +- entering group REJECT {...} [attr_filter.access_reject] expand: %{User-Name} -> test attr_filter: Matched entry DEFAULT at line 11 ++[attr_filter.access_reject] returns updated Delaying reject of request 2 for 1 seconds Going to the next request Waking up in 0.9 seconds. Sending delayed reject for request 2 Sending Access-Reject of id 223 to 127.0.0.1 port 46400 MS-CHAP-Error = "\001E=691 R=1" Why i have error "[mschap] No Cleartext-Password configured. Cannot create LM-Password." ? I define cleartext-password in users. I check raddb/sites-enabled/default authorize { chap mschap eap { ok = return } files } looks ok for me. Whats wrong with mpd/chap/radius ?
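
    A hedged observation (the layout below is the standard users-file format; whether it is the actual cause here is only a guess): the debug output shows the files module matching "DEFAULT at line 172" rather than the test entry, so mschap never receives a Cleartext-Password. In raddb/users the check items belong on the user's first line and the reply items on indented continuation lines, for example:

        test    Cleartext-Password := "test1"
                Service-Type = Framed-User,
                Framed-Protocol = PPP,
                Framed-IP-Address = 10.36.0.2,
                Framed-IP-Netmask = 255.255.255.0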

  • Entity Framework: Delete Object and its related entities

    - by Waheed
    Hi, Does anyone know how to delete an object and all of its related entities? For example, I have the tables Products, Category, ProductCategory and ProductDetails, where ProductCategory is the joining table between Product and Category. I have read from http://msdn.microsoft.com/en-us/library/bb738580.aspx that deleting the parent object also deletes all the child objects in the constrained relationship, and that this result is the same as enabling the CascadeDelete property on the association for the relationship. I am using this code: Product productObj = this.ObjectContext.Product.Where(p => p.ProductID.Equals(productID)).First(); if (!productObj.ProductCategory.IsLoaded) productObj.ProductCategory.Load(); if (!productObj.ProductDetails.IsLoaded) productObj.ProductDetails.Load(); //my own methods. base.Delete(productObj); base.SaveAllObjectChanges(); But I am getting an error on ObjectContext.SaveChanges(), i.e. A relationship is being added or deleted from an AssociationSet 'FK_ProductCategory_Product'. With cardinality constraints, a corresponding 'ProductCategory' must also be added or deleted. Thanks in advance....
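
    A hedged sketch of one common workaround (entity and property names are taken from the post and not checked against the real model): when the association has no cascade rule, the rows of the joining table have to be deleted explicitly before the parent is removed.

        Product productObj = this.ObjectContext.Product
            .Where(p => p.ProductID.Equals(productID))
            .First();

        if (!productObj.ProductCategory.IsLoaded) productObj.ProductCategory.Load();
        if (!productObj.ProductDetails.IsLoaded) productObj.ProductDetails.Load();

        // Delete the dependent rows first so the association constraint is satisfied
        foreach (var pc in productObj.ProductCategory.ToList())
            this.ObjectContext.DeleteObject(pc);
        foreach (var pd in productObj.ProductDetails.ToList())
            this.ObjectContext.DeleteObject(pd);

        this.ObjectContext.DeleteObject(productObj);
        this.ObjectContext.SaveChanges();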

  • WiX/Windows Installer: Re-install to a new folder

    - by vitalyval
    1. I am using WiX to create an installer and would like to implement the following behaviour: if a user launches the msi installer for a product that is already installed, the wizard works similarly to a pure (first-time) installation, with the exception of some things (e.g. the license agreement screen is omitted). The wizard should allow, for example, changing the installation folder and selecting whether to place a desktop shortcut,... I tried to do: <Publish Event="ReinstallMode" Value="amus"><![CDATA[INSTALL_MODE = "Change"]]></Publish> <Publish Event="Reinstall" Value="ALL"><![CDATA[INSTALL_MODE = "Change"]]></Publish> But after installation completes, the product is in the same folder where it was installed the first time, and the desktop icon is in the same state as it was after the first-time install. MSDN says: "Do not attempt to change the target directory path if some components that use the path are already installed for the current user or for a different user". Is there a way to re-install in another folder and add/remove the desktop icon during re-install? 2. Is it normal to use the same KeyPath for several components? For example, the same registry values for Desktop and Programs menu shortcuts? MSDN says: "Two components cannot share the same key path value". But compiling and verifying goes OK, and I did not discover any problems using the same keypaths.

  • jQuery UI Dialog pass on variables

    - by Dante
    Hi, I'm creating a Web interface for a table in Mysql and want to use jQuery dialog for input and edit. I have the following code to start from: $("#content_new").dialog({ autoOpen: false, height: 350, width: 300, modal: true, buttons: { 'Create an account': function() { alert('add this product'); }, Cancel: function() { $(this).dialog('close'); $.validationEngine.closePrompt(".formError",true); } }, closeText: "Sluiten", title: "Voeg een nieuw product toe", open: function(ev, ui) { /* get the id and fill in the boxes */ }, close: function(ev, ui) { $.validationEngine.closePrompt(".formError",true); } }); $("#newproduct").click(function(){ $("#content_new").dialog('open'); }); $(".editproduct").click(function(){ var test = this.id; alert("id = " + test); }); So when a link with the class 'editproduct' is clicked it gets the id from that product and I want it to get to the open function of my dialog. Am I on the right track and can someone help me getting that variable there. Thanks in advance.
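
    A hedged sketch of one way to hand the clicked id to the dialog (selectors follow the post; the "productId" data key is made up for illustration): stash the id on the dialog element before opening it, then read it back inside the open callback.

        $(".editproduct").click(function () {
            // remember which product was clicked, then open the edit dialog
            $("#content_new").data("productId", this.id).dialog("open");
        });

        // and in the dialog options:
        open: function (ev, ui) {
            var productId = $(this).data("productId");
            // use productId to fetch the record and fill in the boxes
        },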

  • Nokogiri pull parser (Nokogiri::XML::Reader) issue with self closing tag

    - by Vlad Zloteanu
    I have a huge XML(400MB) containing products. Using a DOM parser is therefore excluded, so i tried to parse and process it using a pull parser. Below is a snippet from the each_product(&block) method where i iterate over the product list. Basically, using a stack, i transform each <product> ... </product> node into a hash and process it. while (reader.read) case reader.node_type #start element when Nokogiri::XML::Node::ELEMENT_NODE elem_name = reader.name.to_s stack.push([elem_name, {}]) #text element when Nokogiri::XML::Node::TEXT_NODE, Nokogiri::XML::Node::CDATA_SECTION_NODE stack.last[1] = reader.value #end element when Nokogiri::XML::Node::ELEMENT_DECL return if stack.empty? elem = stack.pop parent = stack.last if parent.nil? yield(elem[1]) elem = nil next end key = elem[0] parent_childs = parent[1] # ... parent_childs[key] = elem[1] end The issue is on self-closing tags (EG <country/>), as i can not make the difference between a 'normal' and a 'self-closing' tag. They both are of type Nokogiri::XML::Node::ELEMENT_NODE and i am not able to find any other discriminator in the documentation. Any ideas on how to solve this issue?
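
    A hedged pointer (assuming a reasonably recent Nokogiri): the reader itself can report whether the current element is self-closing, so there is no need to infer it from the node types.

        while reader.read
          if reader.node_type == Nokogiri::XML::Reader::TYPE_ELEMENT
            if reader.empty_element?
              # self-closing tag such as <country/>: push and pop it in one step
            else
              # ordinary start tag: push onto the stack as before
            end
          end
        end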

  • Modify python USB device driver to only use vendor_id and product_id, excluding BCD

    - by Tony
    I'm trying to modify the Android device driver for calibre (an e-book management program) so that it identifies devices by only vendor id and product id, and excludes BCD. The driver is a fairly simply python plugin, and is currently set up to use all three numbers, but apparently, when Android devices use custom Android builds (ie CyanogenMod for the Nexus One), it changes the BCD so calibre stops recognizing it. The current code looks like this, with a simple list of vendor id's, that then have allowed product id's and BCD's with them: VENDOR_ID = { 0x0bb4 : { 0x0c02 : [0x100], 0x0c01 : [0x100]}, 0x22b8 : { 0x41d9 : [0x216]}, 0x18d1 : { 0x4e11 : [0x0100], 0x4e12: [0x0100]}, 0x04e8 : { 0x681d : [0x0222]}, } The line I'm specifically trying to change is: 0x18d1 : { 0x4e11 : [0x0100], 0x4e12: [0x0100]}, Which is, the line for identifying a Nexus One. My N1, running CyanogenMod 5.0.5, has the BCD 0x0226, and rather than just adding it to the list, I'd prefer to eliminate the BCD from the recognition process, so that any device with vendor id 0x18d1 and product id 0x4e11 or 0x4e12 would be recognized. The custom Android rom doesn't change enough for the specifics to matter. The syntax seems to require the BCD in brackets. How can I edit this so that it matches anything in that field?
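
    A hedged illustration of the idea only (the helper below is made up for illustration and is not calibre's actual API; the real detection code may expect the dictionary shape shown above): the point is to stop indexing into the BCD list at all and accept any device whose vendor and product ids appear in the table.

        def is_recognised(vendor_id, product_id, bcd, table=VENDOR_ID):
            # bcd is deliberately ignored; match on vendor and product ids only
            return product_id in table.get(vendor_id, {})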

  • HTML to RTF Converter for .NET

    - by nickyt
    I've already seen lots of posts on the site for RTF to HTML and some other posts talking about some HTML to RTF converters, but I'm really trying to get a full breakdown of what is considered the most widely used commercial product, open source product or if people recommend going home grown. Apologies if you consider this a duplicate question, but I'm trying to create a product matrix to see what is the most viable for our application. I also think this would be helpful for others. The converter would be used in an ASP.NET 2.0 application (we're upgrading to 3.5 shortly but still sticking with WebForms) using SQLServer 2005 (soon 2008) as the DB. From reading a few posts, SautinSoft appears to be popular as a commercial component. Are there other commercial components that you'd recommend for converting HTML to RTF? Price does matter, but even if it's a little on the expensive side, please list it. For open source, I read that OpenOffice.org can be run as a service so that it can convert files. However, this appears to be only Java based. I imagine, I'd need some kind of interop to use this? What .NET open source components, if any, are out there for converting HTML to RTF? For home grown, is an XSLT the way to go with XHTML? If so, what component do you recommend for generating XHTML? Otherwise, what other home grown avenuses do you recommend. Also, please note that I currently don't care so much about RTF to HTML. If a commercial component offers this and the price is still the same, fine, otherwise please don't mention it.

  • Complex Types, ModelBinders and Interfaces

    - by Kieron
    Hi, I've a scenario where I need to bind to an interface - in order to create the correct type, I've got a custom model binder that knows how to create the correct concrete type (which can differ). However, the type created never has the fields correctly filled in. I know I'm missing something blindingly simple here, but can anyone tell me why or at least what I need to do for the model binder to carry on it's work and bind the properties? public class ProductModelBinder : DefaultModelBinder { override public object BindModel (ControllerContext controllerContext, ModelBindingContext bindingContext) { if (bindingContext.ModelType == typeof (IProduct)) { var content = GetProduct (bindingContext); return content; } var result = base.BindModel (controllerContext, bindingContext); return result; } IProduct GetProduct (ModelBindingContext bindingContext) { var idProvider = bindingContext.ValueProvider.GetValue ("Id"); var id = (Guid)idProvider.ConvertTo (typeof (Guid)); var repository = RepositoryFactory.GetRepository<IProductRepository> (); var product = repository.Get (id); return product; } } The Model in my case is a complex type that has an IProduct property, and it's those values I need filled in. Model: [ProductBinder] public class Edit : IProductModel { public Guid Id { get; set; } public byte[] Version { get; set; } public IProduct Product { get; set; } }
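
    A hedged sketch of one way to keep the default property binding working (assuming ASP.NET MVC 2's DefaultModelBinder): create the concrete instance in CreateModel instead of short-circuiting BindModel, so the base binder still walks the IProduct properties and fills them in on the instance you supply.

        public class ProductModelBinder : DefaultModelBinder
        {
            protected override object CreateModel (ControllerContext controllerContext,
                ModelBindingContext bindingContext, Type modelType)
            {
                if (modelType == typeof (IProduct))
                    return GetProduct (bindingContext);   // the same repository lookup as above

                return base.CreateModel (controllerContext, bindingContext, modelType);
            }
        }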

  • Telerik MVC Combobox AutoComplete error

    - by Roachmans
    I am using the Telerik autocomplete option In the header: <script type="text/javascript"> function onAutoCompleteDataBinding(e) { var autocomplete = $('#AutoComplete').data('tAutoComplete'); autocomplete.dataBind(["Product 1", "Product 2", "Product 3"]} </script> In the body of the View: <%=Html.Telerik().AutoComplete() .Name("AutoComplete") .ClientEvents(events => events.OnDataBinding("onAutoCompleteDataBinding")) %> http://demos.telerik.com/aspnet-mvc/combobox/clientsidebinding I have managed to get this working on other applications and it is really quite simple. I have pasted this example on top to show that this one also bombs out on : this.trigger = new $t.list.trigger(this); Think i might have mixed up the .js files and now my auto complete is not working. Any sugesions which js files and in what order they must be for this to work right My Page master relevant parts: <body> <% Html.Telerik().ScriptRegistrar() .DefaultGroup(group => group .Add("MicrosoftAjax.js") .Add("MicrosoftMvcAjax.js") ); %> <div class="MainTableBody"> <asp:ContentPlaceHolder ID="ContentPlaceHolder" runat="server" /> </div> <% Html.Telerik().ScriptRegistrar().Render(); %> </body> </html> In my web.config <add namespace="Telerik.Web.Mvc.UI" /> Any help or comments would be greatly appreciated

  • C# mvc2 client side form validation with xval, prevent post

    - by Rob
    I'm using xval for client-side validation in my ASP.NET MVC 2 web application. Despite the errors it gives when I enter text in a numeric field, it still tries to post the form to the database. The incorrect values are being replaced by 0 and saved to the database, but it shouldn't even be possible to try and submit the form. Can anyone help me out here? I've set the attributes as below: [Property] [ShowColumnInCrud(true, label = "FromPriceInCents")] [Required] //[Range(1, Int32.MaxValue)] public virtual Int32 FromPriceInCents{ get; set; } The controller catching the request looks as below; I'm getting no errors in this part. [AcceptVerbs(HttpVerbs.Post)] [Transaction] [ValidateInput(false)] public override ActionResult Create() { //some foo happens } My view looks like below: <div class="label"><label for="Price">FromPrice</label></div> <div class="field"> <%= Html.TextBox("FromPriceInCents")%> <%= Html.ValidationMessage("product.FromPriceInCents")%></div> And at the end of the view I have the following rule, which generates the correct validation rules in the HTML: <%= Html.ClientSideValidation<Product>("Product") %> I hope someone can help me out with this issue, thanks in advance!

  • Update CGridView when a dropdown value changes

    - by Gautam Borad
    I have a CGridView with columns from a table "product" => {'product_id','category_id',...} and another table "category" => {'category_id','category_name'}; category_id is the FK in the product table. Now I want a dropdown list built from the category table, and on selecting a particular value the CGridView of product should be updated to show only the rows with that category_id. I also need the column filtering/sorting of the CGridView to keep working (using AJAX). I was able to refresh the CGridView when a value is selected from the dropdown, however I am not able to send the category_id with the 'data' for the CGridView: clientScript->registerScript('search', " $('.cat_dropdown').change(function(){ $.fn.yiiGridView.update('order-grid', { data: $(this).serialize(), }); return false; }); "); The data: $(this).serialize() sends only the values that are present in the filtering text fields of the CGridView. How do I append the category_id to it? If the above method is not the right one, please suggest an alternative method.
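
    A hedged sketch (assuming the dropdown sits outside the grid and the grid id is 'order-grid' as in the post): serialize the grid's own filter inputs and append the selected category, so the search action receives both.

        $('.cat_dropdown').change(function () {
            $.fn.yiiGridView.update('order-grid', {
                // grid filter fields plus the chosen category_id
                data: $('#order-grid .filters input, #order-grid .filters select').serialize()
                      + '&' + $(this).serialize()
            });
            return false;
        });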

  • Linq - how does it work??

    - by clarkeyboy
    Hey, I have just been looking into Linq with ASP.Net. It is very neat indeed. I was just wondering - how do all the classes get populated? I mean in ASP.Net, suppose you have a Linq file called Catalogue, and you then use a For loop to loop through Catalogue.Products and print each Product name. How do the details get stored? Does it just go through the Products table on page load and create another instance of class Product for each row, effectively copying an entire table into an array of class Product? If so, I think I have created a system very much like this, in the sense that there is a SiteContent module with an instance of each Manager class - for example there is UserManager, ProductManager, SettingManager and alike. UserManager contains an instance of the User class for each row in the Users table. They also contain methods such as Create, Update and Remove. These Managers and their "Items" are created on every page load. This just makes it nice and easy to access users, products, settings etc in every page as far as I, the developer, am concerned. Any any subsequent pages I need to create, I just need to reference SiteContent.UserManager to access a list of users, rather than executing a query from within that page (ie this method separates out data access from the workings of the page, in the same way as using code behind separates out the workings of the page from how the page is layed out). However the problem is that this technique seems rather slow. I mean it is effectively creating a database on every page load, taking data from another database. I have taken measures such as preventing, for example, the ProductManager from being created if it is not referenced on page load. Therefore it does not load data into storage when it is not needed. My question is basically whether my technique does the exact same thing as Linq, in the sense of duplicating data from tables into properties of classes.. Thanks in advance for any advice or answers about this. Regards, Richard Clarke
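
    A short hedged illustration of the key point (the CatalogueDataContext name is made up; Products and Name come from the question): LINQ to SQL does not copy whole tables into objects on page load. A query is only a description until it is enumerated, and only the rows it actually returns are materialised as Product instances.

        using (var db = new CatalogueDataContext())
        {
            // No SQL has run yet; this is just a query definition
            var query = db.Products.Where(p => p.Name.StartsWith("A"));

            // The SQL executes here, and only the matching rows become Product objects
            foreach (var product in query)
                Console.WriteLine(product.Name);
        }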

  • issue in ObservableCollection

    - by prince23
    Hi, I have a list with the data below, and a class called Information.cs with the properties Name, School and Parent. Example data (Name, School, Parent):

        kumar    fes     All
        manju    fes     kumar
        anu      frank   kumar
        anitha   jss     All
        rohit    frank   manju
        anill    vijaya  manju
        vani     jss     kumar
        soumya   jss     kumar
        madhu    jss     rohit
        shiva    jss     rohit
        vanitha  jss     anitha
        anu      jss     anitha

    Taking this as the input, I want the output formatted as hierarchical data: when Parent is "All" the item is at the topmost level (e.g. kumar fes All). What I need to do here is create an object[0], then check in the list whether kumar exists as a parent; if it does, add those items (e.g. manju fes kumar and anu frank kumar) as children under object[0], and so on for each of them. The following class file shows how the data is formatted:

        public class SampleProjectData
        {
            public static ObservableCollection<Product> GetSampleData()
            {
                DateTime dtS = DateTime.Now;
                ObservableCollection<Product> teams = new ObservableCollection<Product>();
                teams.Add(new Product() { PDName = "Product1", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3), });
                Project emp = new Project() { PName = "Project1", OverallStartTime = dtS + TimeSpan.FromDays(1), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" });
                teams[0].Projects.Add(emp);
                return teams;
            }
        }
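
    A hedged sketch of the usual way to turn such a flat Parent/Child list into a hierarchy (the Node type is made up for illustration; it assumes the Information class above and that System.Linq is imported): group the rows by their Parent value and attach each row's children recursively, treating "All" as the root marker.

        public class Node
        {
            public string Name;
            public string School;
            public List<Node> Children = new List<Node>();
        }

        static List<Node> BuildTree(IEnumerable<Information> rows)
        {
            var byParent = rows.ToLookup(r => r.Parent);   // children grouped by parent name
            return byParent["All"].Select(r => BuildNode(r, byParent)).ToList();
        }

        static Node BuildNode(Information row, ILookup<string, Information> byParent)
        {
            var node = new Node { Name = row.Name, School = row.School };
            foreach (var child in byParent[row.Name])
                node.Children.Add(BuildNode(child, byParent));
            return node;
        }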

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now, I've been trying to figure out how to setup an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specifics needs. My SVN repository is laid out as follows \projects \projectA (a product) \tags \1.0.0.1 \1.0.0.2 ... \trunk \src \proj1 (a VS C# project) \proj2 \documentation Then I have a network share, with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk). Basically, what I want to do in a scheduled batch build are these steps: examine the project's SVN folder and identify tags not present in the network share for each of these tags check out the tag folder build (with Release config) copy the resulting binaries to the network share search for .ndoc files generate CHM files via NDoc copy the resulting CHM files to the network share do the same as in 2., but for the HEAD revision of trunk Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.

  • Is an LSA MSV1_0 subauthentication package needed for some impersonation use cases?

    - by Chris Sears
    Greetings, I'm working with a vendor who has implemented some code that uses a Windows LSA MSV1_0 subauthentication package (MSDN info if you're interested: http://msdn.microsoft.com/en-us/library/aa374786(VS.85).aspx ) and I'm trying to figure out if it's necessary. As far as I can tell, the subauthentication routine and filter allow for hooking or customizing the standard LSA MSV1_0 logon event processing. The issue is that I don't understand why the vendor's product would need these capabilities. I've asked them and they said they use it to perform impersonation. The product definitely does need to do impersonation, but based on my limited win32 knowledge, they could get the functionality they need using the normal auth APIs (LsaLogonUser, ImpersonateLoggedOnUser, etc) without the subauthentication package. Furthermore, I've worked with a number of similar products that all do impersonation, and this is the only one that's used a subauthentication package. If you're wondering why I would care, a previous version of the product had a bug in the subauthentication package dll that would cause lockups or bluescreens. That makes me rather nervous and has me questioning the use of such a low-level, kernel sensitive interface. I'd like to go back to the vendor and say "There's no way you could need an LSA subauth package for impersonation - take it out", but I'm not sure I understand the use cases and possible limitations of the standard win32 authentication/impersonation APIs well enough to make that claim definitively. So, to the win32 security gurus out there, is there any reason you would need an LSA MSV1_0 subauthentication package if all you were doing is impersonation? Thanks in advance for any thoughts!

  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi All, We are launching a new REST API and I wanted some community input on best practices around how we should have input parameters formatted: Right now, our API is very JSON-centric (only returns JSON). The debate of whether we want/need to return XML is a separate issue. As our API output is JSON centric, we have been going down a path where our inputs are a bit JSON centric and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once we currently have: http://our.api.com/Product?id=["101404","7267261"] Should we simplify this as: http://our.api.com/Product?id=101404,7267261 Or is having JSON input handy? More of a pain? We may want to accept both styles but does that flexibility actually cause more confusion and head aches (maintainability, documentation, etc.)? A more complex case is when we want to offer more complex inputs. For example, if we want to allow multiple filters on search: http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]} We don't necessarily want to put the filter types (e.g. productType and color) as request names like this: http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"] Because we wanted to group all filter input together. In the end, does this really matter? It may be likely that there are so many JSON utils out there that the input type just doesn't matter that much. I know our javascript clients making AJAX calls to the API may appreciate the JSON inputs to make their life easier. Thanks, Will

  • Crystal Reports .Net Guidance

    - by Ken Ray
    We have been using .Net and Visual Studio for the last six years, and early on developed a number of web based reporting applications using the .Net version of Crystal Reports that was bundled with Visual Studio. My overall opinion of that product is, to say the least, unfavourable: it seemed incredibly difficult and convoluted to use, we had to make security changes, install various extra software, and so on. Now we are moving to VS2008 and version 3.5 of the .Net framework, and the time has come to redevelop some of these old applications. The developers who used (and somehow mastered) Crystal .Net have long gone, and I am facing a decision - do we stick with Crystal Reports or move to something else? We also have the "full" version of Crystal Reports XI at our disposal. The way we use the product is to produce pdf versions of data extracted from various databases. While some apps use the inbuilt Crystal Reports viewer as well, this seems to be redundant now with the flexibility of grid views - but there is still the need to produce a pdf version of the data in the grid for printing, or in Excel format to download. What is the consensus? Is Crystal Reports .Net worth persisting with, or should we work out how to use version XI? Alternatively, is there a simple and low-cost way to generate pdf reports without using Crystal? What good sources of "how to" information have others found and recommend? Are there suitable books, designed for VS2008 / .Net 3.5 development, that you have used and found of benefit? Thanks in advance.

  • Vote on Pros and Cons of Java HTML to XML cleaners

    - by George Bailey
    I am looking to allow HTML emails (and other HTML uploads) without letting in scripts and stuff. I plan to have a white list of safe tags and attributes as well as a whitelist of CSS tags and value regexes (to prevent automatic return receipt). I asked a question: Parse a badly formatted XML document (like an HTML file) I found there are many many ways to do this. Some systems have built in sanitizers (which I don't care so much about). This page is a very nice listing page but I get kinda lost http://java-source.net/open-source/html-parsers It is very important that the parsers never throw an exception. There should always be best guess results to the parse/clean. It is also very important that the result is valid XML that can be traversed in Java. I posted some product information and said Community Wiki. Please post any other product suggestions you like and say Community Wiki so they can be voted on. Also any comments or wiki edits on what part of a certain product is better and what is not would be greatly appreciated. (for example,, speed vs accuracy..) It seems that we will go with either jsoup (seems more active and up to date) or TagSoup (compatible with JDK4 and been around awhile). A +1 for any of these products would be if they could convert all style sheets into inline style on the elements.
