Search Results

Search found 25660 results on 1027 pages for 'booting issue'.

Page 745/1027 | < Previous Page | 741 742 743 744 745 746 747 748 749 750 751 752  | Next Page >

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old topic and everyone knows the reasons and solutions for it, but here is a little theory first. With Essbase implied sharing, some members are shared even if you do not explicitly set them as shared; these are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations:

    1. A parent has only one child.
    2. A parent has only one child that consolidates to the parent.

    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html

    Now the issue we are going to talk about: we lose data on save even when the parent is a dynamic calc member with a single child.

    A dynamic calc parent with a single child: if we design the form with that selection, the parent appears below the member in the data form. This is by design -- whenever you make a selection using commands to select all members below a parent, the children always appear before the parent. Enter data and save it, then change the way the members are selected, and the value is gone (screenshots omitted in this excerpt). Now the question again: why this behavior?

    1. Data from a Planning data form passes to Essbase row by row.
    2. Because in the data form the child member appears before the parent,
    3. data goes to Essbase for the child (SingleStoreChild) first.
    4. Then, when Planning passes the data for the parent, the parent cell holds #Missing (no data),
    5. which overwrites the child's value with #Missing.

    PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase. Here the parent was dynamic calc and pointed to the same storage as the child in the background, so when Planning passed data to Essbase for the second row it updated the child with missing data. (A little confusing -- let me know if you need more explanation.)

    One solution is simply to change the order of appearance of the parent and the child.

    Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time.

    Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:

    - It makes more sense to work with the binary format directly, rather than coerce strings into doubles.
    - There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries.
    - String.Split is inefficient on fixed-length records.

    In fact it turned out that my problem was more insidious and also more mundane -- a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error.

    The issue is that Double.Parse("NaN") is incredibly slow -- of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1,000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        // Note: TimeSpan.Milliseconds is only the millisecond component; it is
        // adequate here because each run stays well under one second.
        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            DateTime.Now.Subtract(start).Milliseconds));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            DateTime.Now.Subtract(start).Milliseconds));

    I think the moral is to look at the context -- specifically the data -- as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.
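    As a minimal sketch of the workaround, something along these lines keeps the bad tokens away from the slow parse path (the helper name and the decision to map bad tokens to double.NaN are illustrative assumptions, not code from the original application):

        using System;

        static class SafeParse
        {
            // Cheap ordinal check first; only well-formed numbers reach the parser.
            public static double ParseField(string token)
            {
                if (string.Equals(token, "NaN", StringComparison.OrdinalIgnoreCase))
                    return double.NaN;

                double value;
                return Double.TryParse(token, out value) ? value : double.NaN;
            }
        }

    Of course, the real fix is upstream: stop serialising NaNs in the first place.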

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that doesn't come across as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages; we've been doing C++/C# for a long time, so the technology itself is familiar. However, I look around and, without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated.

    But every time I bring up OOP, everyone always nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work. Code reviews have been very helpful, but the problem with code reviews is that they only occur after the fact, so to some it seems we end up rewriting (it's mostly refactoring, but it still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team.

    I am toying with the idea of doing a presentation (or a series) and trying to bring up OOP again along with some examples of existing code that could have been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that a) they'll sit there wondering why I'm telling them stuff they already know, or b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years if not decades.

    Is there another approach which would reach a broader audience than a code review would, but at the same time wouldn't feel like a punishment lecture? I'm not a fresh kid out of college with utopian ideals of perfectly designed code, and I don't expect that from anyone. The reason I'm writing this is that I just did a review of a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D, in the code B, C and D all implement almost the same public interface and B/C have one-liner functions, so that the top-most class A is doing absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D.

    Update: I'm a tech lead (6 months in this role) and do have the full support of the group manager. We are working on a very mature product and maintenance costs are definitely letting themselves be known.

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx

    I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I'd publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well, the most compelling reasons are reliability and portability.

    In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we'd been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure -- we couldn't package it up for another service without going back and reworking the code.

    More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you'll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then, if there's an extended outage, you can't just deploy it onto a new set of kit from a different supplier. The chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time.

    So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you've got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you've made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you'll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between 2 tiles I offset it by 0.5 so that it doesn't get stuck in the wall. This isn't a huge issue though.

        if (y % 20 == 0) { y += 0.5; }
        if (x % 20 == 0) { x += 0.5; }

    Collisions work as follows:

    1. Find the closest point between each tile and the centre of the ball.
    2. If distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, a collision has occurred.
    3. Invert the X/Y speed according to the side of the object collided with.

    The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off. When the ball collides with a corner of a block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (_i and _j are the indexes of the solid object in the 2D map array; x and y are the centre point of the ball):

        void determineRebound(int _i, int _j)
        {
            if (y > _i * tile_w && y < _i * tile_w + tile_w)
            {
                // Not a corner
                xs *= -1;
            }
            else if (x > _j * tile_w && x < _j * tile_w + tile_w)
            {
                // Not a corner
                ys *= -1;
            }
            else
            {
                // Corner: reflect the velocity about the collision normal
                float nx = x - close_x;
                float ny = y - close_y;
                float len = sqrt(nx * nx + ny * ny);
                nx /= len;
                ny /= len;
                float projection = xs * nx + ys * ny;
                xs -= 2 * projection * nx;
                ys -= 2 * projection * ny;
            }
        }

    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of 2 tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2D tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
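    For reference, the "closest point" test in step 1 above can be sketched as below (written in C# purely for illustration; the method name, parameters and square tile assumption are mine, not code from the game):

        using System;

        static class TileCollision
        {
            // Clamp the ball centre to the tile's axis-aligned box to get the
            // closest point, then compare the squared distance to the radius.
            public static bool CircleHitsTile(float ballX, float ballY, float radius,
                                              int col, int row, float tileW,
                                              out float closeX, out float closeY)
            {
                closeX = Math.Max(col * tileW, Math.Min(ballX, col * tileW + tileW));
                closeY = Math.Max(row * tileW, Math.Min(ballY, row * tileW + tileW));

                float dx = ballX - closeX;
                float dy = ballY - closeY;
                return dx * dx + dy * dy <= radius * radius;
            }
        }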

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will be running online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) around the REST web services of a business partner, which have to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response and then passing that back to the client.

    There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Win and Mac), mobile apps (iOS), and a web front end. Having a single API which we then send to our partner protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format -- JSON for the web and probably XML for the desktop and mobile clients (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API.

    As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably code defensively for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then to turn around and call our partner's API. There probably is not much choice about it, though. But in the back of my mind I wonder if maybe a more dynamic language would be a better choice.

    I want to reach out and see if anybody has had to do this before, what technology solutions they have used (I am not attached to this one; these days Azure can host other technologies), and whether anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting a SOAP web service over a proxy server, which is not what I am referring to here.

    Note: Cross-posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!
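    To make the shape of the wrapper concrete, a minimal pass-through sketch in ASP.NET Web API might look like the following. The partner base address, route and controller name are hypothetical placeholders, not the real partner API:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class BlobsController : ApiController
        {
            // Hypothetical partner endpoint; a single HttpClient is reused across requests.
            private static readonly HttpClient Partner = new HttpClient
            {
                BaseAddress = new Uri("https://partner.example.com/api/")
            };

            // GET api/blobs/{id} -- forward the call and relay the partner's
            // response (status code, headers and body) back to our own client.
            public async Task<HttpResponseMessage> Get(string id)
            {
                return await Partner.GetAsync("blobs/" + Uri.EscapeDataString(id));
            }
        }

    Real code would add mapping between our API shape and the partner's, which is exactly where the static-typing concern above comes in.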

    Read the article

  • Bootloader Problems Grub Won't Load Windows 7

    - by user108805
    I sent this to [email protected] but still got no response, so I thought I could get a faster solution here. I am running Windows 7 64-bit and Ubuntu 12.04 LTS on separate partitions. The message I sent is below. Boot-Repair URL: http://paste.ubuntu.com/1365163/

    Originally I was unable to access Ubuntu after a Windows update (Ubuntu was installed using Wubi). Rather than logging into Ubuntu from the Windows 7 bootloader, it led to the GRUB command prompt. No matter what I did there, it would not log me into Linux. As a result I uninstalled Ubuntu from the Add/Remove Programs application in Windows 7. I then re-installed Ubuntu 12.04 LTS using a live USB; this time, however, I created a partition for it.

    I then restarted and got the GRUB bootloader, which loads Ubuntu 12.04 LTS with no problems. However, when I select Windows (listed as "Windows 7 (loader)"), it just refreshes the GRUB bootloader instead of loading Windows 7.

    I then used the Windows 7 repair disk to run bootrec /fixmbr and bootrec /fixboot. This led to no bootloader coming up at all when I started my computer; instead I got a blank black screen with a flashing white cursor. I went on to run bootrec /rebuildbcd and bootrec /scanos. These did nothing to change the situation. When I ran bootrec /scanos it said that no Windows 7 installations were present. After this I decided to reinstall Windows 7, only for that to change nothing either.

    Afterwards I ran Boot-Repair, after which I began to get the GRUB bootloader again, which would load Ubuntu 12.04 LTS but still would not load Windows 7. I also did a sudo update-grub, which recognized Windows 7 as being installed but still didn't fix the issue of loading it.

    While running Ubuntu I have no problem accessing my Windows 7 partition, which is formatted as NTFS. It shows all the files and folders reflecting that the re-install did take place, and it also shows all of my old applications and folders in the Windows.old folder.

    I am completely stuck at this point and have no clue what I should do next. Any help you can offer will be greatly appreciated. Thank you --gap

    Read the article

  • Revenue Recognition: Performance Obligation Pass a Hurdle

    - by Theresa Hickman
    I met up with Seamus Moran, our resident accounting expert, to get his thoughts about the latest happenings with IFRS.

    Last week, on March 13, the comment period on the FASB and IASB exposure draft "Revenue From Contracts with Customers" closed. FASB and IASB have just over 20 comment letters -- a very small number. The implication is that the exposure draft does reflect general acceptance, and therefore will be published as both a US and an internationally Generally Accepted Accounting Standard. At a recent conference call, FASB and IASB said they expected to complete their report to both Boards on the comments by early summer, complete their deliberation of the comments by the fall, and draft the final standard text by late this year. It is assumed the concept of Performance Obligations would become US GAAP and IFRS in place of the existing standards. They confirmed that all existing US GAAP and IFRS guidelines would be withdrawn, and that they were in dialogue with the SEC on withdrawing the SEC guidelines on the revenue issue as well.

    The open question is when Performance Obligations will become effective. The Boards have said that they would like this Revenue Recognition standard and the Lease Accounting standard to be effective at the same time, because what isn't either insurance, interest, or a lease is a revenue arrangement. However, ascertaining what is generally acceptable in respect of leases is proving a little elusive, and the Boards have recently diverged a little on the P&L side of the accounting (although both agree that there will be no off-balance-sheet leases). It is therefore likely that the Lease standard might be delayed. One wonders whether the Boards will define the effective date of the Revenue standard independently of the Lease standard or stick with their resolve to make them co-effective. The Boards have also said that neither standard will be effective before June 2015.

    Here is the gist of the new Revenue Recognition principle: recognize revenue to depict the transfer of goods or services in an amount that reflects the consideration expected to be entitled in exchange for those goods and services.

    Steps to apply the core principle:

    1. Identify the contract with the customer.
    2. Identify the separate performance obligations.
    3. Determine the transaction price.
    4. Allocate the transaction price.
    5. Recognize revenue when a performance obligation is satisfied.

    Read the article

  • How to handle this unfortunately non-hypothetical situation with end-users?

    - by User Smith
    I work in a medium-sized company with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) that was wanted at the very end was not added into the application. So this application has been running in live/production since the end of 2011 -- I might add, without issue.

    Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken it must be fixed first, ahead of prioritized projects.

    I have compared code and queries and there is no difference since 2011, which is proof A. I then was able to get one of the end-users to admit that it never worked (proof B), but since then that end-user has gone back and said that it was working previously... I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which contain the requirements and daily project updates and which specifically state "funcA not achieved due to time constraints" (proof C).

    I have spoken with many of them, and I can see where they could be confused, as they are very far from a programming background, but I also know they are intelligent enough to act as a group to bypass project prioritization in order to get functionality that will make their jobs easier. The worst part is that now groupthink is setting in, and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as reviewing the state of the logic goes, it is very cut and dried, to the point of: if 1 = 1, funcA will not work.

    So that is the end of the description of my scenario. I am trying not to get severely dinged on my performance metrics because of this, which would essentially have me moved to fixing a production problem that doesn't exist and that will probably take over a month. I am looking for direct answers to this question. This question is not for rants, polling, or discussions, as that is not the format for StackExchange. Please don't downvote me too terribly -- it is pretty common on this specific site of Stack -- I am looking for honest answers to this situation and I couldn't find a more appropriate forum.

    Read the article

  • Confusion about inheritance

    - by Samuel Adam
    I know I might get downvoted for this, but I'm really curious. I was taught that inheritance is a very powerful polymorphism tool, but I can't seem to use it well in real cases. So far, I can only use inheritance when the base class is an abstract class. Examples:

    - If we're talking about Product and Inventory, I quickly assumed that a Product is an Inventory item, because a Product must be inventoried as well. But a problem occurred when the user wanted to sell an Inventory item. It just doesn't seem right to change an Inventory object to its subtype (Product); it's almost like trying to convert a parent to its child.
    - Another case is Customer and Member. It is logical (at least for me) to think that a Member is a Customer with some more privileges. The same problem occurred when the user wanted to upgrade an existing Customer to become a Member.
    - A very trivial case is the Employee case, where Manager, Clerk, etc. can be derived from Employee. Still the same upgrading issue.

    I tried to use composition instead for some cases, but I really wanted to know if I'm missing something for an inheritance solution here. My composition solutions for those cases (see the sketch after this post):

    - Create a reference to Inventory inside a Product. Here I'm making the assumption that Product and Inventory are talking in different contexts: while Product is in the context of sales (price, volume, discount, etc.), Inventory is in the context of physical management (stock, movement, etc.).
    - Make a reference to a Membership inside the Customer class instead of the previous inheritance solution. Upgrading a Customer is then only about instantiating the Customer's Membership property.
    - The Employee example keeps being taught in basic programming classes, but I think it's more proper to have Manager, Clerk, etc. derive from an abstract Role class and make that a property of Employee.

    I found it difficult to find an example of a concrete class deriving from another concrete class. Is there any inheritance solution with which I can solve those cases? Being new to this OOP thing, I really, really need guidance. Thanks!
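    A minimal sketch of the Customer/Membership composition idea mentioned above (C# used for illustration; the property and method names are illustrative assumptions):

        using System;

        public class Membership
        {
            public DateTime Since { get; set; }
            // extra member privileges would live here
        }

        public class Customer
        {
            public string Name { get; set; }

            // Null for an ordinary customer; assigned when the customer upgrades.
            public Membership Membership { get; set; }

            public bool IsMember
            {
                get { return Membership != null; }
            }

            public void UpgradeToMember()
            {
                if (Membership == null)
                    Membership = new Membership { Since = DateTime.UtcNow };
            }
        }

    The point of the design is that "upgrading" becomes assigning a property rather than converting an existing object to a subtype.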

    Read the article

  • LWJGL - Mixing 2D and 3D

    - by nathan
    I'm trying to mix 2D and 3D using LWJGL. I have written two little methods that allow me to easily switch between 2D and 3D:

        protected static void make2D() {
            glEnable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            GL11.glLoadIdentity();
        }

        protected static void make3D() {
            glDisable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity(); // Reset The Projection Matrix
            // Calculate The Aspect Ratio Of The Window
            GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            glLoadIdentity();
        }

    Then in my rendering code I would do something like:

        make2D();
        // draw 2D stuff here
        make3D();
        // draw 3D stuff here

    What I'm trying to do is draw a 3D shape (in my case a quad) and a 2D image. I found this example and took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image:

        TextureLoader loader = new TextureLoader();
        Sprite s = new Sprite(loader, "player.png");

    And how I render it:

        make2D();
        s.draw(0, 0);

    It works great. Here is how I render my quad:

        glTranslatef(0.0f, 0.0f, 30.0f);
        glScalef(12.0f, 9.0f, 1.0f);
        DrawUtils.drawQuad();

    Once again, no problem; the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes.

    Now my problem is when I want to mix both of the above: loading/rendering the 2D image and rendering the quad. When I try to load my 2D image with the following:

        s = new Sprite(loader, "player.png");

    my quad is not rendered any more (I'm not even trying to render the 2D image at this point). Just creating the texture causes the issue. After looking a bit at the code of Sprite and TextureLoader, I found that the problem appears after the call to glTexImage2D. In the TextureLoader class:

        glTexImage2D(target, 0, dstPixelFormat,
            get2Fold(bufferedImage.getWidth()),
            get2Fold(bufferedImage.getHeight()),
            0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer);

    Commenting this line out makes the problem disappear. My question then is: why? Is there anything special to do after calling this function in order to draw in 3D? Does this function alter the render state or the projection matrix?

    Read the article

  • Failed to start up after upgrading software

    - by Landy
    I asked this question on Super User an hour ago, then I found out about this community, so I moved the question here...

    I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there were some updates to install and I confirmed the action. I should have read the update list, but I didn't; I can only remember there was an update for cups. After the upgrade, Update Manager required a restart, which I also confirmed. But after the restart, the computer can't start up. There are errors in the console:

        Begin: Running /scripts/init-premount ... done.
        Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
        [xxx]usb 1-8: new high speed USB device using ehci_hcd and address 3
        [xxx]usb 2-1: new full speed USB device using ohci_hcd and address 2
        [xxx]hub 2-1:1.0: USB hub found
        [xxx]hub 2-1:1.0: 4 ports detected
        [xxx]usb 2-1.1: new low speed USB device using ohci_hcd and address 3
        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        ALERT! /dev/sda1 does not exist. Dropping to a shell!

        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs) [cursor is here]

    At the moment, I can't input anything in the console; the keyboard doesn't work at all. What's wrong? How can I check the boot args or "root=" as suggested? How can I fix this issue? Thanks.

    PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev).
    PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64.
    PS3: From this link, I think the keyboard doesn't work because the USB driver is not loaded at that time.

    Read the article

  • Several New Hints

    - by Ondrej Brejla
    Hi all! Today we would like to introduce some of our new experimental hints for NetBeans 7.2. They are called Unused Use Statement and Immutable Variables.

    Unused Use Statement

    This hint is quite simple: it highlights (underlines) your use statements which are not used. The typical case is after some refactoring, when you forget to remove obsolete use statements. This hint warns you about them and lets you remove them easily -- just click on the hint bulb in the gutter and select Remove Unused Use Statement. And of course, it works with combined use statements too.

    Immutable Variables

    The next one is a hint which checks for too many assignments to a variable. And why? That's simple: mostly you should use just one assignment per variable. But sometimes you are lazy and write something like the first screenshot showed (the code screenshots are omitted in this excerpt), which really amounts to the second -- and that's exactly the case where our new hint warns you that "Too many assignments (2) into variable $foo occurred." Nothing more.

    Yes, we know there are cases where more assignments are fine and no warning should occur, e.g. when someone prefers the longer increment syntax to the short one. So we tried to handle those cases so the hint doesn't bother you when there is no need.

    Note: We are almost sure that this hint doesn't cover all your use cases, because there are a lot of them. So if you find something strange, write it into our Bugzilla so we can handle it better for you. Thanks for your patience!

    The last thing is that you can set the number of allowed assignments in Tools -> Options -> Editor -> Hints -> PHP: Immutable Variables.

    Note: This hint works just for common variables, not for fields. We have an enhancement request for that and it should be implemented in the next version of NetBeans (probably 7.3).

    And that's all for today. As usual, please test it, and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks.

    Read the article

  • Paranoid management, contractor checking work [closed]

    - by user833345
    Just wanted to get some opinions and experiences on an issue I'm having at work. First, a little background.

    I've been working at a company for some time (past any probation period) and am rewriting a horrendous system. No tests, incomplete and broken functionality everywhere, enough copypasta to feed a small village, redundant code, more unused SQL tables than used ones, and terrible performance. I've never seen such bad code; pretty much all of it is worthy of being posted on TheDailyWTF. The company has been operating for a number of years and has had a string of bad developers working on this system.

    I made the call to rewrite instead of refactor, since I judged it to be less work overall and decided that the result would address the requirements more appropriately -- the central requirement being a future-proof system for the next decade with plenty of room to scale up. Refactoring would have entailed untangling a huge ball of yarn and at the same time integrating it with a proper foundation, or building a foundation from scratch. I've introduced the latest spiffy framework, unit and functional testing, CI, a bug tracker and an agile workflow to the environment. I've fixed most of the performance issues of the old system (there were no indexes on any of the tables, for example). I've created an automated deployment process for the old system. The CTO has been maintaining the old system while I have been building the new one, and he has been advising management that everything is being done as per best practices.

    However, management is hiring a contractor to come in and verify my work. In my experience, this is unprecedented. I can understand their reasoning to an extent, since they've had bad luck in the past, but I can't help but feel somewhat offended that they distrust two senior developers who have been working with them for some time enough to bring in a third party. And it's not just me who is under watch -- people's emails are constantly checked, someone had a remote desktop application installed on their computer whose usage logs I was asked to check to try to determine whether they were stealing sensitive data, and there are CCTV cameras in one of the rooms. It's the first time I've decided to disable my Skype history at work.

    Am I right to feel indignant here? Has anyone else ever encountered such a situation? If so, how did it work out in the end? Was it worth sticking around? Should I just find another job?

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical GitHub scenario), it seems like for a "traditional" team there could be some problems over the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations.

    In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that 2 people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task).

    Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated. Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo.

    So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?

    Read the article

  • Building SANE from git source produces a backend mismatch on 12.04, even when built locally

    - by deinonychusaur
    It seems to me that with Ubuntu Precise Pangolin it is all but easy to do a proper install of SANE from source (the git repo). While trying to find an answer to this I've found other scanning issues where the output people posted seems to indicate they suffer the same problem (unknowingly).

    If I run this on a fresh install of Ubuntu 12.04 with SANE compiled from the git source, I get:

        $ scanimage -V
        scanimage (sane-backends) 1.0.24git; backend version 1.0.22

    (I basically followed the instructions on http://ubuntuportal.com/2012/02/how-to-get-an-canon-canoscan-lide-100-scanner-to-work-in-ubuntu-11-10linux-mint-12.html since I didn't find any other information, making sure that SANE was not installed prior to installation.)

    My primary interest is the epson2 backend. In 1.0.22 it offers the wrong TPU settings for the Epson V700 (TPU2 mode wasn't supported in 1.0.22, and the scanner is useless to me without TPU2 support). Since asking it to enter transparency mode shows 1.0.22 behaviour, this implies that the epson2 backend comes from 1.0.22 and not from the 1.0.24 I just built.

    If I install SANE with a prefix pointing to a local folder and run that version of scanimage, it still produces the mismatch. However, on another computer where I installed a custom 1.0.22 build of SANE prior to upgrading to Ubuntu 12.04, I can build and install the same SANE git version locally and have it correctly match backends:

        $ ./SANE/bin/scanimage -V
        scanimage (sane-backends) 1.0.24git; backend version 1.0.24
        $ scanimage -V
        scanimage (sane-backends) 1.0.22; backend version 1.0.22

    On this computer the 1.0.24 build correctly finds TPU2 on the Epson V700. So what am I missing or doing wrong? (I want to replace 1.0.22 with 1.0.24 for the whole system; the local build was just for debugging.) Any help would be much appreciated.

    Edit 1: I just tried compiling SANE using this instruction on Ubuntu 10.04 and it worked like a charm. However, when I upgraded to 12.04 (I really would like to run 12.04), SANE was downgraded to 1.0.22. When trying the same set of instructions on 12.04 I was still out of luck -- the backend mismatch was there again (and I do have libusb-dev installed).

    Edit 2: I updated to Ubuntu 12.10, which now ships the 1.0.23 SANE drivers. I haven't dared trying to compile from source on 12.10, since 1.0.23 is good enough for me. This is just a work-around, though, and I would still like to know what's up with Ubuntu 12.04.

    Read the article

  • Parallel Port Problem in 12.04

    - by Frank Oberle
    I have a “dumb” printer attached to a parallel port in my machine which works fine under the “other” resident operating system (from Redmond) on the same machine. I recently added Ubuntu 12.04 as a dual boot on the machine, but Ubuntu doesn't seem to recognize the parallel port at all. All I need to set up the printer is a really plain-vanilla, fixed-pitch, text-only generic driver, which is present, but no parallel ports show up. (The other printers, all on USB ports, seem to work just fine.)

    Following what appeared to me to be the most reasonable of the many conflicting pieces of advice on the web, here's what I did. I added the following lines to /etc/modules:

        parport_pc
        ppdev
        parport

    Then, after rebooting, I checked that the lines were still present, and they were. I ran dmesg | grep par and got the following references in the output that seemed like they might have to do with the parallel port:

        [   14.169511] parport_pc 0000:03:07.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
        [   14.169516] PCI parallel port detected: 9710:9805, I/O at 0xce00(0xcd00), IRQ 21
        [   14.169577] parport0: PC-style at 0xce00 (0xcd00), irq 21, using FIFO [PCSPP,TRISTATE,COMPAT,ECP]
        [   14.354254] lp0: using parport0 (interrupt-driven).
        [   14.571358] ppdev: user-space parallel port driver
        [   16.588304] type=1400 audit(1347226670.386:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=964 comm="apparmor_parser"
        [   16.588756] type=1400 audit(1347226670.386:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=964 comm="apparmor_parser"
        [   16.673679] type=1400 audit(1347226670.470:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1010 comm="apparmor_parser"
        [   16.675252] type=1400 audit(1347226670.470:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1014 comm="apparmor_parser"
        [   16.675716] type=1400 audit(1347226670.470:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1014 comm="apparmor_parser"
        [   16.676636] type=1400 audit(1347226670.474:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1015 comm="apparmor_parser"
        [   16.677124] type=1400 audit(1347226670.474:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1015 comm="apparmor_parser"
        [ 1545.725328] parport0: ppdev0 forgot to release port

    I have no idea what any of that means, but the line "parport0: ppdev0 forgot to release port" seems unusual.

    I was still unable to add a printer for my old clunker, so I tried the direct approach, typing echo "Hello" > /dev/lp0, and received a Permission denied message. I then tried echo "Hello" > /dev/parport0, which gave me no message at all but still didn't print anything.

    Running the command sudo /usr/lib/cups/backend/parallel gives the following:

        direct parallel:/dev/lp0 "unknown" "LPT #1" "" ""

    Checking the permissions for /dev/parport0: owner, group, and other are all set to read and write.

        crw-rw---- 1 root lp  6, 0 Sep 9 16:37 /dev/lp0
        crw-rw-rw- 1 root lp 99, 0 Sep 9 16:37 /dev/parport0

    The output of the command lpinfo -v includes the following line:

        direct parallel:/dev/lp0

    I've read several web postings that seem to suggest this has been a problem for several years, but the bug reports were closed because there wasn't enough information to address the issue (shades of Microsoft!). Any suggestions as to what I might be missing here?

    Read the article

  • Are we ready for the Cloud computing era?

    - by andrewkatumba
    "Elite?" developer circles are abuzz with the notion of cloud computing: the increasing bandwidth, the desire for faster and leaner operations and, of course, the need to outsource non-core IT-related business requirements, e.g. word processing, spreadsheets, data backups. In strolls Chrome OS (and I am sure other similar OSes will join with their own wagons for us to jump on), offering just that -- internet-based services (more like a repository of them): quick, efficient and "reliable", and for the most part cheap and often even free! And we all go rhapsodic!

    It boils down to the age-old dilemma, "if the cops are so busy protecting us, then who will protect them" (even the folks back at Hollywood keep us reminded)! Who is going to ensure that these internet-based services do not go down (either intentionally or by some malicious third party), leading to a multinational, colossal disaster? At the risk of sounding pessimistic, IT IS NOT AN ISSUE OF TRUST; this is but a mere case of Murphy's Law!

    What then? Should the "cloud" be trusted to this extent at this time? This is an era where challenges are rapidly solved with lightning promptness to "beat the competition". My hope is that our solutions are not just creating problems that we may not be able to solve!

    Keeping my ear on the ground.

    Read the article

  • Why does creating dynamic bodies in JBox2D freeze my app?

    - by Amplify91
    My game hangs/freezes when I create dynamic bullet objects with Box2D, and I don't know why. I am making a game where the main character can shoot bullets when the user taps the screen. Each touch event spawns a new FireProjectileEvent that is handled properly by an event queue, so I know my problem is not trying to create a new body while the Box2D world is locked.

    My bullets are created and managed by an object pool class like this:

        public Projectile getProjectile() {
            for (int i = 0; i < mProjectiles.size(); i++) {
                if (!mProjectiles.get(i).isActive) {
                    return mProjectiles.get(i);
                }
            }
            return mSpriteFactory.createProjectile();
        }

    mSpriteFactory.createProjectile() leads to the physics component of the Projectile class creating its Box2D body. I have narrowed the issue down to this method, which looks like this:

        public void create(World world, float x, float y, Vec2 vertices[], boolean dynamic) {
            BodyDef bodyDef = new BodyDef();
            if (dynamic) {
                bodyDef.type = BodyType.DYNAMIC;
            } else {
                bodyDef.type = BodyType.STATIC;
            }
            bodyDef.position.set(x, y);
            mBody = world.createBody(bodyDef);

            PolygonShape dynamicBox = new PolygonShape();
            dynamicBox.set(vertices, vertices.length);

            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = dynamicBox;
            fixtureDef.density = 1.0f;
            fixtureDef.friction = 0.0f;

            mBody.createFixture(fixtureDef);
            mBody.setFixedRotation(true);
        }

    If the dynamic parameter is set to true, my game freezes before crashing, but if it is false, it creates a projectile exactly how I want -- it just doesn't behave properly (because a projectile is not a static object). Why does my program fail when I try to create a dynamic object at runtime but not when I create a static one? I have other dynamic objects (like my main character) that work fine. Any help would be greatly appreciated.

    (A screenshot of a method profile I did is omitted here; the especially notable entry was number 8.) I'm just still unsure what I'm doing wrong.

    Other notes: I am using JBox2D 2.1.2.2 (upgraded from 2.1.2.1 to try to fix this problem). When the application freezes, if I hit the back button, it appears to move my game backwards by one update tick. Very strange.

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than into an event on the east coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event.

    Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it is somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that attendees are prepared to travel, not with the family, and want a well-laid-out conference.

    What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it is important to have industry-leading speakers, MVPs etc. at the conference -- only questions about Microsoft speakers. I know survey writing is very difficult without biasing the answers one way or another. There was also no choice to show people's preference: would people rather have Microsoft speakers, or the summit held on the East Coast or in the Central US? I also find it amazing that people prefer hundreds of developers over the SQLCAT and CSS teams; surely that indicates another issue, namely a lack of understanding of what these teams do.

    All in all, it is clear that people showed they want an event outside of Seattle and don't want PASS putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting towards certain questions, which prioritised them over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013 -- or maybe they will rethink 2012, who knows.

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structures in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information passes to the shader correctly:

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct TestStruct
        {
            [FieldOffset(0)]
            public Vector3 mEyePosition;

            [FieldOffset(12)]
            public int type;
        }

    This works perfectly against this HLSL fragment:

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            int type;
        };

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);

            switch (type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }

            return returnColour;
        }

    However, when I use the following structure definitions...

        // Note this is 16 because HLSL packs in 4-float 'chunks'.
        // It is also simplified, but still demonstrates the problem.
        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)]
            public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)]
            public Vector3 mEyePosition;

            // Missing 4 bytes here for correct packing.
            [FieldOffset(16)]
            public InternalTestStruct mInternal;
        }

    ...the following HLSL fragment no longer works:

        struct InternalType
        {
            int type;
        };

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            InternalType internalStruct;
        };

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);

            switch (internalStruct.type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }

            return returnColour;
        }

    Is there a problem with the way I am packing the struct, or is it another issue? To re-iterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.
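    A cheap sanity check on the managed side of a mismatch like this -- offered as a sketch rather than a diagnosis of the issue above -- is to dump the offsets and total size the CLR actually assigns and compare them with the HLSL packing by hand. The Vector3 stand-in below is an assumption; substitute whatever vector type the real code uses:

        using System;
        using System.Runtime.InteropServices;

        // Stand-in for the real Vector3 type (three floats).
        [StructLayout(LayoutKind.Sequential)]
        struct Vector3 { public float X, Y, Z; }

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        struct InternalTestStruct
        {
            [FieldOffset(0)] public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            [FieldOffset(16)] public InternalTestStruct mInternal;
        }

        static class LayoutCheck
        {
            static void Main()
            {
                // Print what the runtime really uses, to compare against the
                // 16-byte register packing expected on the HLSL side.
                Console.WriteLine("TestStruct size:     {0}", Marshal.SizeOf(typeof(TestStruct)));
                Console.WriteLine("mEyePosition offset: {0}", Marshal.OffsetOf(typeof(TestStruct), "mEyePosition"));
                Console.WriteLine("mInternal offset:    {0}", Marshal.OffsetOf(typeof(TestStruct), "mInternal"));
            }
        }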

    Read the article

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how a SQL interface to the data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and Java platforms, including more sophisticated features like full automation.

    Getting the Events

    Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example:

        SELECT * FROM Calendars

    Step 2: To get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be filtered further by using the StartDateTime or EndDateTime columns. For example:

        SELECT * FROM CalendarEvents
        WHERE (CalendarId = '[email protected]')
          AND (StartDateTime >= '1/1/2012')
          AND (StartDateTime <= '2/1/2012')

    Step 3: SharePoint stores data in lists. There are various types of lists, e.g. document lists and calendar lists, and a SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column.

    Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example:

        SELECT * FROM Calendar
        WHERE (EventDate >= '1/1/2012')
          AND (EventDate <= '2/1/2012')

    Synchronizing the Events

    Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed.

    Pre-Built Demo Application

    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for SharePoint V2, and will expire in 2013.

    Source Code

    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
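    As a rough illustration of what issuing these queries looks like from .NET, here is a hedged sketch using the generic ADO.NET factory pattern. The provider invariant name, connection string and column access are placeholders -- check the provider's own documentation for the real values; only the SQL text is taken from the steps above:

        using System;
        using System.Data.Common;

        class CalendarDump
        {
            static void Main()
            {
                // "RSSBus.Google" is an assumed invariant name, not confirmed by this article.
                DbProviderFactory factory = DbProviderFactories.GetFactory("RSSBus.Google");

                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = "User=...;Password=...;";   // placeholder credentials
                    conn.Open();

                    DbCommand cmd = conn.CreateCommand();
                    cmd.CommandText = "SELECT * FROM CalendarEvents " +
                                      "WHERE (CalendarId = '[email protected]') " +
                                      "AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')";

                    using (DbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader[0]);   // first column; real column names vary
                    }
                }
            }
        }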

    Read the article

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to the company firewall server, but supposedly port 1720 is open on my Ubuntu server. So I want to test it with netcat:

        sudo nc -ul 1720

    The port is listening on the machine itself:

        sudo netstat -tulpn | grep nc
        udp        0      0 0.0.0.0:1720      0.0.0.0:*                 29477/nc

    The port is open and in use on the machine itself:

        lsof -i -n -P | grep 1720
        gateway  980 myuser  8u  IPv4 187284576  0t0  UDP *:1720

    I checked the firewall on the current server:

        sudo ufw allow 1720/udp
        Skipping adding existing rule
        Skipping adding existing rule (v6)

        sudo ufw status verbose | grep 1720
        1720/udp                   ALLOW IN    Anywhere
        1720/udp                   ALLOW IN    Anywhere (v6)

    But when I try echoing data to it from another computer (I replaced the x's with the real integers):

        echo "Some data to send" | nc xx.xxx.xx.xxx 1720

    it didn't write anything. So then I tried telnet from the other computer as well:

        telnet xx.xxx.xx.xxx 1720
        Trying xx.xxx.xx.xxx...
        telnet: connect to address xx.xxx.xx.xxx: Operation timed out
        telnet: Unable to connect to remote host

    although I don't think telnet works with UDP sockets. I ran nmap from another computer within the same local network and this is what I got:

        sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx

        Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT
        NSE: Loaded 36 scripts for scanning.
        Initiating Ping Scan at 15:41
        Scanning xx.xxx.xx.xx [4 ports]
        Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 15:41
        Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed
        Initiating UDP Scan at 15:41
        Scanning xtremek.com (xx.xxx.xx.xx) [1 port]
        Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports)
        Initiating Service scan at 15:41
        Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx)
        Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx)
        Initiating Traceroute at 15:41
        Completed Traceroute at 15:41, 0.01s elapsed
        NSE: Script scanning xx.xxx.xx.xx.
        NSE: Script Scanning completed.
        Nmap scan report for xtremek.com (xx.xxx.xx.xx)
        Host is up (0.00013s latency).
        PORT     STATE  SERVICE VERSION
        1720/udp closed unknown
        Too many fingerprints match this host to give specific OS details
        Network Distance: 1 hop

        TRACEROUTE (using port 1720/udp)
        HOP RTT     ADDRESS
        1   0.13 ms xtremek.com (xx.xxx.xx.xx)

        Read data files from: /usr/share/nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds
        Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B)

    The only thing I can think of is a firewall or VPN issue. Is there anything else I can check before requesting that they look at the firewall server again?

    Read the article

  • 2D camera perspective projection from 3D coordinates -- HOW?

    - by Jack
    I am developing a camera for a 2D game with a top-down view that has depth. It's almost a 3D camera. Basically, every object has a Z even though it is in 2D, and similarly to parallax layers their position, scale and rotation speed vary based on their Z. I guess this would be a perspective projection. But I am having trouble converting the objects' 3D coordinates into the 2D space of the screen so that everything has correct perspective and scale. I never learned matrices, though I did dig into the topic a bit today.

    I tried without using matrices, thanks to this article, but every attempt gave awkward results. I'm using ActionScript 3 and Flash 11+ (Starling), where the screen coordinates are left-handed: (0,0) is at the top-left and y grows downwards (the coordinate-system illustration from the original post is omitted here).

    I can explain further what I did if you want to help me sort out what's wrong, or you can directly tell me how you would do it properly. In case you prefer the former, read on. These are images showing the formulas I used:

        upload.wikimedia.org/math/1/c/8/1c89722619b756d05adb4ea38ee6f62b.png
        upload.wikimedia.org/math/d/4/0/d4069770c68cb8f1aa4b5cfc57e81bc3.png

    (Sorry, new users can't post images, but both are from the Wikipedia article linked above, section "Perspective projection". That's where you'll find what all the variables mean, too.)

    The long formula is greatly simplified because I believe a normal top-down 2D camera has no X/Y/Z rotation values (correct?). Then it becomes d = a - c. Still, I can't get it to work. Maybe you could explain what numbers I should put in a(xyz), c(xyz), theta(xyz), and particularly e(xyz)? I don't quite get how e is different from c in my case. c.z is also an issue for me. If the Z of the camera's target object is 0, should the camera's Z be something like -600 (i.e. a focal length of 600)?

    Whatever I do, it's wrong. I only got it to work when I used arbitrary calculations that "looked" right, like most cameras with parallax layers seem to do, but that's fake! ;) If I want objects to travel between Z layers I might as well do it right. :)

    Thanks a lot for your help!
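    For reference, in the rotation-free case described above (d = a - c), the projection reduces to scaling the x/y offsets by focal / d.z. A minimal sketch, written in C# purely for illustration (the screen-centre offsets and the focal value are assumptions, not numbers from the post):

        using System;

        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
        }

        static class Projector
        {
            // a = object position, c = camera position; "focal" plays the role of e.z.
            // Returns screen coordinates with (0,0) top-left and y growing down.
            public static void Project(Vec3 a, Vec3 c, float focal,
                                       float screenW, float screenH,
                                       out float sx, out float sy, out float scale)
            {
                float dx = a.X - c.X;
                float dy = a.Y - c.Y;
                float dz = a.Z - c.Z;      // must be positive: object in front of the camera

                scale = focal / dz;        // farther objects shrink
                sx = screenW * 0.5f + dx * scale;
                sy = screenH * 0.5f + dy * scale;
            }
        }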

    Read the article

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies -- HTML, CSS, JS, jQuery, PHP and MySQL.

    Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach them. Do they need to know about bold as well as strong, etc.?

    UPDATE: based on your feedback I will only be teaching the latest version of everything -- CSS3, HTML5, etc.

    I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2-hour lab. Obviously there's only so much I can cover in that time, and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far were:

    Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL, etc. Looking at different example websites and discussing how they work.
    Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong, etc.)
    Lesson 2 - CSS for styling and layout -- fonts, web fonts, float, etc.
    Lesson 3 - Intro to programming with JS (variables, loops, conditionals, functions)
    Lesson 4 - More JS programming fundamentals, DOM manipulation
    Lesson 5 - jQuery -- making things fly about and look cool
    Lesson 6 - XML and Ajax
    Lesson 7 - PHP basics -- syntax, server-side principles
    Lesson 8 - PHP and MySQL -- forms, logins, saving user info
    Lesson 9 - don't know
    Lesson 10 - don't know

    Please let me know if you think this is the right order, what I have missed, how to use any spare sessions, etc. Thanks :)

    UPDATE BASED ON RESPONSES: Thanks for all your responses -- some great stuff. To be absolutely clear, this is not a computer science course; it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2-hour session; however, I do not have the option of reducing the syllabus, so I will just have to explain what each technology does and encourage the students to research it in their own time.

    Read the article
