Search Results

Search found 1487 results on 60 pages for 'crazy eddie'.

Page 10/60

  • What's happening in Red Gate's .NET Developer Tools division?

    .NET 4.0, Silverlight 4, F# decompilation in .NET Reflector, our crazy shipping schedule, and some prize draw winners. Yes, with a list of topics that broad, it can only be another update on what's happening in Red Gate's .NET Developer Tools division.

    Read the article

  • Problem with graphics on Ubuntu 12.10

    - by kargeo
    Please help. I just installed 12.10 x86_64 on my new computer (AMD FX-8120 CPU, ATI HD 7770, MSI 990FXA) and I'm facing the following error: 2012-11-17 18:19:04,393 WARNING: /sys/module/fglrx/drivers does not exist, cannot rebind fglrx driver. From "Additional Drivers" I cannot install the ATI drivers. From "System Settings" > "Software Sources", only the X.org server driver works, but with problems: Compiz runs continuously, and if I open, for example, a video, the fan runs like crazy, the mouse pointer is slow, and the windows act strange (flashing, not opening, opening slowly, etc.). What should I do? Thanks

    Read the article

  • What is the Future of Search Engine Optimisation?

    Those who are into Internet marketing would like to know what the future holds for them, but frankly, it is very difficult to predict this accurately. Forget about the future of SEO; it is very difficult to even predict the future of the Internet and computers in general. For instance, if 40 years back anyone had predicted that a computer would be sitting on a table in almost every home in the country, everyone would have thought that he or she was crazy.

    Read the article

  • The Linux Desktop isn't Dead, it's Pining

    DaniWeb: "I know it sounds crazy but the Linux Desktop isn't dead, it's just pining. It's pining for the correct platform--a tablet computer. And, I'm not referring to some cheap imitation tablet that will merely satisfy a few observers and nerdlets who use Linux. I'm thinking of a tablet computer for hardcore Linux moguls."

    Read the article

  • How to remove installations like SET, Metasploit

    - by Renado
    I have installed Windows 7 and Ubuntu 13.04 as a dual boot on my laptop, and in Ubuntu I have installed SET from www.trustedsec.com. It created a directory called set, but when I try to start SET it doesn't start: when I type ./set I get an error, or "./set is a directory". Can you give me a solution for how to install it step by step, or how to remove SET from my PC? A second problem is that when I install aircrack-ng and type airmon-ng in a terminal, my desktop goes crazy and I have to restart manually with the power button. Thanks

    Read the article

  • Planning Your Website Optimization

    It may sound crazy, but the very first thing you need to do when planning your website optimization has absolutely nothing to do with your website or the internet. No, the first thing to know is...

    Read the article

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a Mac Mini (with Mac OS X 10.4) from my Linux machine using VNC and there's an issue that is driving me crazy... My Linux machine has 4 GB of RAM, I run a lot of various apps on it, and I've got no issue at all: it's all snappy and I don't hear the hard disk swapping/reading/writing too often. Now with VNC, the hard disk is swapping like mad when I'm moving things on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temp VNC files to go into that ramdisk, but the problem is I can't find any temp files. I've attempted this: #!/bin/bash while [ true ] do lsof | grep vnc done and eyeball-parsed the output to try to find some temp files: no luck. The VNC version I'm using is this one: $ vncviewer -version VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16 Copyright (C) 2002-2005 RealVNC Ltd. No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of RAM), so there's really no reason to swap like crazy. This is driving me mad. Any help as to how I could solve this problem is most welcome, because this is literally driving me nuts.

    Read the article

  • Building an Email server for mass emails

    - by EGHDK
    I recently started doing IT odd jobs for a company. The company has a pretty decent-sized mailing list that costs them over $3000 per month to send email from. The company is set on creating their own email server so that it can just run and send emails to the client lists. They only send out emails roughly once a month. Has anyone had experience with this? This wouldn't really be an email server, I guess (as it doesn't need to handle incoming messages); it just has to be able to send around 200,000 emails, once a month. What would be the best way to go about this? Services online like MailChimp have proved to be too pricey. It's not an ad that is being sent out, it's more of a monthly newsletter, so we don't need any crazy software for ROI or anything crazy like that. If I could fit 200,000 people in Gmail, I'd do it, but I don't think I can (heh... maybe I should try).

    Read the article


  • Is the reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I worked up to finally buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press a key once, get a chain of the letter repeated) on multiple keys. It doesn't react to generic cleaning (up to compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering: every other keyboard I ever owned, including ultra-cheap crap, managed to last longer than that; the typing experience is nice, but not lifechanging-fan-forever nice for me; my choice of mechanical keyboards is severely limited (not many brands are represented in the local market, and primarily crazy-looking gamer models; a Russian, not to mention Russian and Ukrainian if possible, layout excludes international ordering); and the price tag for the meek year of use I got out of it is plain demoralizing. It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is probably not something I am highly interested in. Considering my constraints and bad experience with reliability, is it practical for me to sink more money into buying mechanical keyboard(s) again? In other words: manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confidently of the same opinion?

    Read the article

  • How to implement a virtual server running Ubuntu inside a fileserver in Windows?

    - by user541445
    I work in a company that has some limitations regarding their budget. They need a client/server application. I can code the application; I've made mini tests on primordial applications that work. The thing is that they only have fileservers, and the application they need must be concurrency compliant, so the database must be on their local network (a fileserver is the only choice). So far, I have explored almost every option available, starting with desktop databases: Access (we have a license, but concurrency is just not effective, and besides, it's Windows software, yuck); SQLite (nice, but since the information they manage is a lot, I've performed some concurrency tests with INSERTs and SELECTs at the same time, and it fails, somehow it just stops inserting); OpenOffice Base (I dismembered a Base file only to see that it was a file-mode HSQLDB; I've tried it, not concurrent at all); etc. (you name the open-source desktop database manager, and yes, I've tried that one). Server databases: call me a stubborn person, but some server database documentation says that their databases will work in a file mode. I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc. So, I really need a hand here. I've been doing this install/delete/depression thing with a lot of databases for 3 weeks, until I came up with a crazy idea and I just came here to express it. So, my question comes down to this: will installing light virtualization software like Virtual PC, and then installing a server OS like Ubuntu Server inside it, work? Will it work 24/7, or only while the Virtual PC software is open? Will this work on a fileserver? Any suggestion, answer, criticism of the place I work, or crazy new concurrent database that will work on a fileserver will be most welcome.

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions. In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one-hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You’re supposed to start fresh each time; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. The final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again, because there are so many what-ifs when you’ve finished writing something: “what if I’d done it this way?”. You can answer all those questions at a coderetreat, because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, and we were still trying to figure out how you go about the Game of Life. So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one.
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is: write a test, watch it fail for the right reason; write code to pass the test, watch it pass; refactor, check the test still passes; repeat! It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging and it made them think in a different way which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this: public T If<T>(bool condition, Func<T> left, Func<T> right) { var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}}; return dict[condition].Invoke(); } That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make it easier unless it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply that to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing, what does the other guy do if they’re not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault.
    I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so we didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?” and “What surprised you?”, “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: also people said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, they promote functional programming. And TDD, people found really hard. “Tell, don’t ask”, people found really, really hard, and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
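    For readers who don't know the Game of Life rules the talk skips over, they fit in a single boolean expression. This is only a minimal sketch for reference, not the code anyone wrote at the coderetreat, and the class and method names are made up:

      static class GameOfLifeRules
      {
          // A live cell survives with 2 or 3 live neighbours; a dead cell comes
          // to life with exactly 3. Written as one expression, it also happens
          // to satisfy the session's "no if-statements" constraint.
          public static bool AliveNext(bool aliveNow, int liveNeighbours)
          {
              return liveNeighbours == 3 || (aliveNow && liveNeighbours == 2);
          }
      }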

    Read the article

  • 11 Types of Developers

    - by Lee Brandt
    Jack Dawson Jack Dawson is the homeless drifter in Titanic. At one point in the movie he says, “I figure life’s a gift, and I don’t intend on wasting it.” He is happy to wander wherever life takes him. He works himself from place to place, making just enough money to make it to his next adventure. The “Jack Dawson” developer clings on to any new technology as the ‘next big thing’, and will find ways to shoe-horn it into places where it is not a fit. He is very appealing to the other developers because they want to try the newest techniques and tools too. He will only stay until the new technology either bores him or becomes problematic. Jack will also be hard to find once the technology has been implemented, because he will be on to the next shiny thing. However, having a Jack Dawson on your team can be beneficial. Jack can be a great ally when attempting to convince a stodgy, corporate entity to upgrade. Jack usually has an encyclopedic recall of all the new features of the technology upgrade and is more than happy to interject them in any conversation. Tom Smykowski Tom is the neurotic employee in Office Space, and is deathly afraid of being fired. He will do only what is necessary to keep the status quo. He believes as long as nothing changes, his job is safe. He will scoff at anything new and be the naysayer during any change initiative. Tom can be useful in off-setting Jack Dawson. Jack will constantly be pushing for change and Tom will constantly be fighting it. When you see that Jack is getting kind of bored with a new technology and Tom has finally stopped wetting himself at the mere mention of it, then it is probably the sweet spot of beginning to implement that new technology (providing it is the right tool for the job). Ray Kinsella Ray is the guy who built the Field of Dreams. He took a risk. Sometimes he screwed it up, but he knew he didn’t want to end up regretting not attempting it. He constantly doubted himself, but he knew he had to keep going. Granted, he was doing what the voices in his head were telling him to do, but my point is he was driven to do something that most people considered crazy. Even when his friends, his wife, and even he himself said he was crazy, somewhere inside himself, he knew it was the right thing to do. These are the innovators. These are the Bill Gates and Steve Jobs of the world. They take risks, they fail, they learn and they get better. Obviously, this kind of person thrives in start-ups and smaller companies, but that is due to their natural aversion to bureaucracy. They want to see their ideas put into motion quickly, and withdrawn quickly if they don’t work. Short feedback cycles are essential to Ray. He wants to know if his idea is working or not. He wants to modify or reverse his idea if it is not working or makes things worse. These are the agilistas. May I always be one.

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure about my tags for this question, but in short... I need to solve a problem of packing industrial parts into crates while minimizing the total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they assume a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top & bottom, one can pack more objects into the same space than if all tops faced the same direction. The original post includes a crude ASCII sketch comparing "T"-shaped parts all facing the same way with parts alternating up and down so that they interlock. Right now we are solving the problem by something like this: (1) using CAD software, make actual models of how things fit in crate boxes; (2) make estimates of actual crate dimensions and write them into an Excel file. Step (1) is a crazy amount of work, and as a result we have just a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best part packing into a crate, given its shape, but maybe I do... Question: well, here is my question: assuming that I can represent my parts in 2D (to be determined how), and that some parts look like the letter T and some parts look like rectangles, which algorithm can I use to give me a good estimate of the dimensions of the encompassing area, while ensuring that the parts are packed into the minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use? My thought / approach: my naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area & dimensions. Then place the 2nd part in 0-degree orientation, repeat, place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful due to the long lengths of the parts). Proceed using brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (as with the interlocking letter-T example above). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is a human element involved as well, e.g. like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds a human element to the constraints, so there will have to be some customization.
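    To make the brute-force idea a bit more concrete, here is a minimal Python sketch of a "shelf" packing heuristic, under the simplifying assumption that every part has already been reduced to an axis-aligned width x height rectangle (which gives up the T-shape interlocking trick). The function name and the sample part sizes are made up; it only gives a rough area estimate, not an optimal packing:

      def shelf_pack(parts, max_width):
          """parts: list of (width, height); returns (used_width, used_height, placements)."""
          parts = sorted(parts, key=lambda wh: wh[1], reverse=True)   # tallest first
          placements = []            # (x, y, width, height) for each placed part
          shelf_x, shelf_y = 0, 0    # cursor on the current shelf
          shelf_height = 0
          used_width = 0
          for w, h in parts:
              if shelf_x + w > max_width:        # part doesn't fit: start a new shelf
                  shelf_y += shelf_height
                  shelf_x, shelf_height = 0, 0
              placements.append((shelf_x, shelf_y, w, h))
              shelf_x += w
              shelf_height = max(shelf_height, h)
              used_width = max(used_width, shelf_x)
          used_height = shelf_y + shelf_height
          return used_width, used_height, placements

      parts = [(40, 30), (25, 30), (25, 10), (60, 15), (10, 10)]
      w, h, layout = shelf_pack(parts, max_width=80)
      print(w, h, w * h)   # enclosing width, height and an area estimate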

    Read the article

  • System.EnterpriseServices.Wrapper.dll error

    - by Elaine
    This has driven me crazy. Parser Error. Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Could not load file or assembly 'System.EnterpriseServices.Wrapper.dll' or one of its dependencies. (Exception from HRESULT: 0x800700C1) Source Error: Line 1: <%@ Application Codebehind="Global.asax.cs" Inherits="PMP.MvcApplication" Language="C#" % Yesterday I shut down my Win7 machine; a Windows update had been pending there without any progress for nearly an hour, so I shut my laptop. When I re-opened Win7 and ran the PMP MVC application, this error occurred. I finished that pending Windows update, but that didn't help. I googled and found that I should re-install .NET Framework 1.1/2.1; I tried, but nothing good happened, and the error is still here. I even spent 4 hours re-installing VS2010, but still cannot solve this. If I need to re-install Win7 for this, I will go crazy... How did this happen? And how do I overcome it? Thanks for your time.

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great. Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second, and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10mil records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine. Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key: ColumnPath cp = new ColumnPath(); cp.Column_family = "Standard1"; cp.Column = utf8Encoding.GetBytes("site"); string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000)); ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE); Thanks for any insights

    Read the article

  • Uploading binary for iPhone app: "The signature was invalid" again, again and again...

    - by user338386
    Hello! I'm going crazy! I'm trying to upload the binary of my first application, but I always get the same error: "The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate." I did everything, EVERYTHING!! I created the certificate request, used it for both the developer and the distribution certificate, and created the provisioning profile (12 times!!!), always cleaning my keychain and my Xcode by deleting the old certificates and profiles. I rebooted the machine, restarted Xcode, the log is correct, but... I can't upload my app!!!! I checked whether my iPhone is connected (I tried with the iPhone disconnected too). I checked the certificate both in my project settings "Distribution" configuration (a duplicate of the "Release" configuration) and in my target settings. Reveal in Finder, compress the app and send the zip... I tried with Application Loader and iTunes Connect online... but nothing! NOTHING!! I've spent 8 hours! And I still can't get my app uploaded!!! I'm really going crazy! Can anyone help me pleeease? Thx!

    Read the article

  • User-defined copy ctor, and copy-ctors further down the chain - compiler bug? Programmer's brain bug?

    - by J.Colmsee
    Hi. I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct: struct BulletFXData { int time_next_fx_counter; int next_fx_steps; Particle particles[2];//this is the interesting one ParticleManager::ParticleId particle_id[2]; }; The member "Particle particles[2]" has a self-made kind of smart-ptr in it (a resource-counted texture class). This smart pointer has a default constructor that initializes the ptr to 0 (but that is not important). I also have another struct, containing the BulletFXData struct: struct BulletFX { BulletFXData data; BulletFXRenderFunPtr render_fun_ptr; BulletFXUpdateFunPtr update_fun_ptr; BulletFXExplosionFunPtr explode_fun_ptr; BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr; BulletFX( BulletFXData data, BulletFXRenderFunPtr render_fun_ptr, BulletFXUpdateFunPtr update_fun_ptr, BulletFXExplosionFunPtr explode_fun_ptr, BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr) :data(data), render_fun_ptr(render_fun_ptr), update_fun_ptr(update_fun_ptr), explode_fun_ptr(explode_fun_ptr), lifetime_over_fun_ptr(lifetime_over_fun_ptr) { } /* //USER DEFINED copy-ctor. if it's defined things go crazy BulletFX(const BulletFX& rhs) :data(data),//this line of code seems to do a plain memory-copy without calling the right ctors render_fun_ptr(render_fun_ptr), update_fun_ptr(update_fun_ptr), explode_fun_ptr(explode_fun_ptr), lifetime_over_fun_ptr(lifetime_over_fun_ptr) { } */ }; If I use the user-defined copy-ctor, my smart-pointer class goes crazy, and it seems that the copy ctor / assignment operator aren't called as they should be. So, does this all make sense? It seems as if my own copy-ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code, a small example could have illustrated it too, but often you guys ask for the real code, so well, here it is :D
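    For what it's worth, a likely explanation (an assumption based only on the snippet above): in the commented-out copy constructor there is no parameter named data, so data(data) initializes each member from itself, i.e. from an indeterminate value, and the smart pointer inside Particle never gets a proper copy; initializing from rhs (e.g. data(rhs.data)) restores the normal copy chain. A tiny self-contained sketch of the same pitfall, with a made-up Widget type:

      #include <iostream>
      #include <string>

      struct Widget {
          std::string name;

          Widget(std::string name) : name(name) {}    // fine: "name" here is the parameter

          // Buggy version, mirroring the question: no parameter is called "name",
          // so "name(name)" would copy the member from its own uninitialized self.
          // Widget(const Widget& rhs) : name(name) {}

          // Correct version: copy explicitly from rhs, so std::string's copy ctor runs.
          Widget(const Widget& rhs) : name(rhs.name) {}
      };

      int main() {
          Widget a("bullet");
          Widget b(a);                     // uses the correct copy ctor
          std::cout << b.name << "\n";     // prints "bullet"
      }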

    Read the article

  • JavaScript Regular Expression

    - by Ghoul Fool
    Having problems with regular expressions in JavaScript. I've got a number of strings that need delimiting by commas. Unfortunately the substrings don't have quotes around them, which would make life easier. var str1 = "Three Blind Mice 13 Agents of Cheese Super 18" var str2 = "An Old Woman Who Lived in a Shoe 7 Pixies None 12" var str3 = "The Cow Jumped Over The Moon 21 Crazy Cow Tales Wonderful 9" They are in the form of PHRASE1 (mixed type with spaces) INTEGER1 (1 or 2 digits) PHRASE2 (mixed type with spaces) WORD1 (single word, mixed type, no spaces) INTEGER2 (1 or 2 digits), so I should get: result1 = "Three Blind Mice, 13, Agents of Cheese, Super, 18" result2 = "An Old Woman Who Lived in a Shoe, 7, Pixies, None, 12" result3 = "The Cow Jumped Over The Moon, 21, Crazy Cow Tales, Wonderful, 9" I've looked at txt2re.com, but can't quite get what I need and ended up delimiting by hand. But I'm sure it can be done, albeit by someone with a bigger brain. There are lots of examples of regex around, but I couldn't find any that deal with phrases, so I was wondering if anyone could help me out. Thank you.
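    One possible sketch (not from the question, and assuming PHRASE1 never contains a stray one- or two-digit number of its own): anchor a lazy match for the first phrase against the first 1-2 digit integer, let a greedy match take the second phrase, then capture the single word and the final integer, and join the five captures with commas.

      // PHRASE1   INT1       PHRASE2   WORD1    INT2
      var re = /^(.+?)\s+(\d{1,2})\s+(.+)\s+(\S+)\s+(\d{1,2})$/;

      function delimit(str) {
        var m = str.match(re);
        if (!m) {
          return null;                // input did not fit the expected pattern
        }
        return m.slice(1).join(", "); // m[1]..m[5] are the five captured fields
      }

      var str1 = "Three Blind Mice 13 Agents of Cheese Super 18";
      console.log(delimit(str1));     // "Three Blind Mice, 13, Agents of Cheese, Super, 18"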

    Read the article

  • Handling very large lists of objects without paging?

    - by user246114
    Hi, I have a class which can contain many small elements in a list. It looks like: public class Farm { private ArrayList<Horse> mHorses; } Just wondering what will happen if the mHorses array grew to something crazy like 15,000 elements. I'm assuming that trying to write and read this from the datastore would be crazy, because I'd get killed on the serialization process. It's important that I can get the entire array in one shot without paging, and each Horse element may only have two string properties in it, so they are pretty lightweight: public class Horse { private String mId; private String mName; } I don't need these horses indexed at all. Does it sound reasonable to just store the mHorses array as a raw Text field, and force my clients to do the deserialization? Something like: public class Farm { private Text mHorsesSerialized; } Then whenever the client receives a Farm instance, it has to take the raw string of horses and split it in order to reinstantiate the list, something like: // GWT client perhaps Farm farm = rpcCall.getMyFarm(); String horsesSerialized = farm.getHorses(); String[] horseBlocks = horsesSerialized.split(","); for (int i = 0; i < horseBlocks.length; i++) { // .. continue deserializing the individual objects ... } Yeah... so hopefully it would be quick to read a Farm instance from the datastore, and the serialization penalty is paid by the client. Thanks
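    A rough sketch of what that hand-rolled serialization might look like (not the poster's actual code; the Horse constructor, accessors and the HorseCodec class are all made up for the example, and it assumes ids and names never contain the chosen separator characters - real data would need escaping or a proper format such as JSON):

      import java.util.ArrayList;
      import java.util.List;

      // Minimal stand-in for the question's Horse; the real class only shows two private fields.
      class Horse {
          private final String id;
          private final String name;
          Horse(String id, String name) { this.id = id; this.name = name; }
          String getId()   { return id; }
          String getName() { return name; }
      }

      public class HorseCodec {
          private static final String FIELD_SEP = "|";
          private static final String RECORD_SEP = ",";

          // Server side: flatten the list into one string to store in the Text field.
          static String serialize(List<Horse> horses) {
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < horses.size(); i++) {
                  if (i > 0) sb.append(RECORD_SEP);
                  sb.append(horses.get(i).getId()).append(FIELD_SEP).append(horses.get(i).getName());
              }
              return sb.toString();
          }

          // Client side: split the blob back into Horse objects.
          static List<Horse> deserialize(String blob) {
              List<Horse> horses = new ArrayList<Horse>();
              if (blob == null || blob.isEmpty()) return horses;
              for (String record : blob.split(RECORD_SEP)) {
                  String[] fields = record.split("\\|");   // "|" is a regex metacharacter
                  horses.add(new Horse(fields[0], fields[1]));
              }
              return horses;
          }
      }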

    Read the article

  • “compute launch button tooltip” error in Eclipse

    - by Brahms
    I'm trying to run the Android SDK with Eclipse for the first time. I have never used Eclipse before, and I'm running into the following error message over and over again, with no specific trigger: "compute launch button tooltip has encountered a problem". I tried to Google it, but I can't find a solution. It's driving me crazy, please help. I'm using Ubuntu 12.04 LTS. I tried re-installing Eclipse; same thing.

    Read the article

  • How can I quickly zoom in on the Mac OS X version of Word without having to use the menu?

    - by Lloyd
    (I'm using the Mac version of MS Word 2011.) I used to happily use the wheel mouse to zoom, but after upgrading to the Magic Mouse (which uses only finger-slide movements to scroll and pan) I can no longer hold Ctrl and roll the mouse to zoom (driving me crazy). I can't see a useful keyboard shortcut, and the zoom slider bar in the lower right of the Word screen isn't practical (in my experience). Is there any way to zoom in on the Mac version of Microsoft Word 2011 without resorting to using a menu?

    Read the article
