Search Results

Search found 16987 results on 680 pages for 'second'.

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

      - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
      - These interdependencies are not obvious to the make utility, and can lead to races.
      - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

      - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
      - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
      - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self-contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations.

    The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

      - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
      - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

      - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
      - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
      - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
      - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

    Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:

        ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
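    To make the mechanism concrete, here is a hedged sketch of what such a link line could look like; the proto paths, object names, and use of Solaris ld's -G flag are illustrative assumptions, not taken from the actual ON makefiles:

        # Hypothetical link line: search the workspace proto areas first, and
        # treat any library resolved from the default /lib or /usr/lib paths
        # as a fatal error, except for libc.so which is explicitly excluded.
        ld -G -o libfoo.so.1 foo1.o foo2.o \
            -L$ROOT/proto/usr/lib -L$ROOT/proto/lib \
            -z assert-deflib=libc.so -z fatal-warnings \
            -lbar -lc

    With the stub proto symlinks in place for non-OSnet dependencies, any -l library that still resolves from the link-editor defaults indicates a makefile error rather than a legitimate system dependency.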
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found underscore how difficult such mistakes are to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

  • How can I keep a file in Windows 7's cache?

    - by netvope
    Sometimes you know better than Windows which files will be re-used later. Suppose you have 8GB of memory, and you use the same 1GB file every hour in an I/O-bound application (which takes 1 second to finish if the file is cached, and 1 minute if not). Now you process some other 16GB of data that are not going to be re-used. Naturally the frequently used 1GB file will be pushed out of the cache. It would be beneficial if one could tell Windows to keep that 1GB file in memory. (Better yet, it would be great if I could tell Windows not to cache those 16GB of data, but I'm not optimistic that this can be done.) The poor-man's way to keep a file in the cache would be to keep reading the file. Are there any better ways? Are you aware of any programs that do this? (If this can be easily done under Linux, please let me know too.)
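    For the Linux aside, a minimal sketch of the poor-man's approach described above: re-read the file on a timer so its pages stay warm in the page cache. The path and interval here are placeholders:

        # Keep re-reading the file so the kernel's page cache keeps it resident.
        # Crude, but it implements exactly the "keep reading" idea above.
        while true; do
            cat /data/big-1gb-file > /dev/null
            sleep 300
        done

    On Linux, a dedicated tool such as vmtouch (whose -l option locks a file's pages in memory via mlock) is the usual cleaner alternative to a loop like this.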

  • nVidia Control Panel Won't Recognize SLI

    - by Nakaan
    So yesterday I bought a brand new 1,000W PSU to replace my 650W one and to allow for SLI. I installed it and my second card (both are GTX 560 Ti cards). I reinstalled my drivers and my system recognizes both cards. The control panel says "GTX 560 Ti x2", and I can select between them for PhysX, sound output, picture output (although I only have one hooked up to my monitor), etc. But the kicker is I can't turn on SLI. Everything I've read says I'll get a balloon notification saying I can use SLI and the control panel will have a new section, etc., but none of those things happens. Anyone have any ideas?

    System:
      - Win7 Professional x64
      - 2x GTX 560 Ti
      - 3.1 GHz Intel Core i5
      - 1,000W Xion PSU
      - 8GB RAM
      - (Can't think of my motherboard right now)

    Note: My PSU and MOBO both say they're SLI certified and I'm using an SLI bridge.

  • Calculating collision force with AfterCollision/NormalImpulse is unreliable when IgnoreCCD = false?

    - by Michael
    I'm using Farseer Physics Engine 3.3.1 in a very simple XNA 4 test game. (Note: I'm also tagging this Box2D, because Farseer is a direct port of Box2D and I will happily accept Box2D answers that solve this problem.) In this game, I'm creating two bodies. The first body is created using BodyFactory.CreateCircle and BodyType.Dynamic. This body can be moved around using the keyboard (which sets Body.LinearVelocity). The second body is created using BodyFactory.CreateRectangle and BodyType.Static. This body is static and never moves. Then I'm using this code to calculate the force of collision when the two bodies collide:

        staticBody.FixtureList[0].AfterCollision += new AfterCollisionEventHandler(AfterCollision);

        protected void AfterCollision(Fixture fixtureA, Fixture fixtureB, Contact contact)
        {
            float maxImpulse = 0f;
            for (int i = 0; i < contact.Manifold.PointCount; i++)
                maxImpulse = Math.Max(maxImpulse, contact.Manifold.Points[i].NormalImpulse);
            // maxImpulse should contain the force of the collision
        }

    This code works great if both of these bodies are set to IgnoreCCD=true. I can calculate the force of collision between them 100% reliably. Perfect. But here's the problem: if I set the bodies to IgnoreCCD=false, that code becomes wildly unpredictable. AfterCollision is called reliably, but for some reason the NormalImpulse is 0 about 75% of the time, so only about one in four collisions is registered. Worse still, the NormalImpulse seems to be zero for completely random reasons. The dynamic body can collide with the static body 10 times in a row in virtually exactly the same way, and only 2 or 3 of the hits will register with a NormalImpulse greater than zero. Setting IgnoreCCD=true on both bodies instantly solves the problem, but then I lose continuous collision detection. Why is this happening and how can I fix it? Here's a link to a simple XNA 4 solution that demonstrates this problem in action: http://www.mediafire.com/?a1w242q9sna54j4

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I ask udev to re-read the config now? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, reboot fixes this issue.)

    UPD: yes, I know I should prepend the commands with sudo, but either one I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0.

    UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. Both commands I posted above execute without error, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected.

    UPD 3: let me explain the whole case better ;-) For development purposes I write a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I perform a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the network persistent rules. Right after the machine is booted for the first time there are 2 rules:

      - eth0, which does not exist, since it refers to the MAC from the original VM image
      - eth1, which exists, but all the configuration in all files refers to eth0, so it is not that good for me

    So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would take extra time, which is not a good thing at the VM-building stage) and just want to have my /dev rebuilt with some command, so I have a ready-to-use VM without any reboots.
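    One reboot-free approach worth sketching (hedged: it needs root, the interface must be down, and nothing may be holding it) is to rename the interface directly with iproute2 after fixing the persistent rule, rather than asking udev to re-apply it:

        # Hypothetical sketch: rename eth1 to eth0 in place so the running
        # system matches the edited persistent-net rule, without a reboot.
        ip link set eth1 down
        ip link set eth1 name eth0
        ip link set eth0 up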

  • How to rotate log files when running pyapns and twistd

    - by Henrik
    I use pyapns (an iPhone push server) which in turn uses Twisted (the twistd daemon). The twistd daemon produces twistd.log files. It rotates them to twistd.log.1, twistd.log.2 and so on when twistd.log reaches 1MB, but it doesn't use logrotate, so I guess the rotation is built in. The problem is that this continues forever and old log files are never deleted, which eventually fills my disk. I've tried to use logrotate or similar to rotate the logs, but then I would need to run logrotate very, very often, since I need to rotate BEFORE twistd.log reaches 1MB. This could happen within a second for all I know, depending on how much log is produced. So how can I rotate the logs without hacking the pyapns/twistd scripts?
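    Since twistd already caps twistd.log at 1MB and does the rotation itself, one low-tech sketch is to leave that mechanism alone and only prune the rotated files from cron; the path and retention period below are placeholder assumptions:

        # Hypothetical daily cron entry: delete rotated twistd logs older than
        # 7 days, leaving the live twistd.log untouched for the daemon.
        0 3 * * * find /var/pyapns -name 'twistd.log.*' -mtime +7 -delete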

  • How can I convince my boss to invest into the developer environment?

    - by user95291
    Our boss says that if developers made fewer mistakes, the company would have money for displays, servers, etc. An oft-mentioned example is the late firing of an underperforming colleague, whose salary would have covered some of these expenses. On the other hand, it has happened a few times that it took several days to free up some disk space on our servers, since we can't get any more disks. The cost in man-days was definitely higher than the cost of a new HDD. Another example is that we use 14-15" notebooks for development, and most of the developers only get external displays after they have spent one year at the company. The price of a 22-24" display is just a small fraction of a developer's annual salary. Devs say that they like the company for other reasons (high-quality code, interesting projects, etc.), but these kinds of issues are not just time-consuming but also demotivating. From the developers' point of view, it seems the boss can always find an issue in the past that could have been handled better, so it's pointless to work better in order to earn a second display/HDD/whatever. How can I convince my boss to invest more into the development environment? Is it possible to break this endless loop?

  • CPU load, USB connection vs. NIC

    - by T.J. Crowder
    In general, and understanding that the answer may vary by manufacturer and model (and driver, and...), in consumer-grade workstations with integrated NICs, does the NIC rely on the CPU for a lot of help (as is typically the case with a USB controller, for instance), or is it fairly intelligent and capable on its own (like, say, the typical FireWire controller)? Or is the question too general to answer? (If it matters, you can assume Linux.) Background: I'm looking at connecting a device (digital television capture) that will be delivering ~20-50 Mbit/sec of data to a somewhat under-powered workstation. I can get a USB 2 High-Speed device or a network-attached device, and am interested in avoiding impacting the CPU where possible. Obviously, if it's a 100Mbit NIC, that's roughly half its theoretical inbound bandwidth, whereas it's only roughly a tenth of the 480 Mbit/sec of the USB 2 "High Speed" interface. But if the latter requires a lot of CPU support and the former doesn't...

  • Fastest booting desktop linux distro? [closed]

    - by Kim
    I'm currently running Ubuntu 9.04 on my laptop and I'm very happy with it. But boot times aren't great... So I'd like to have a second distribution on my hard disc that I can boot to quickly check my email and stuff like that. It really only needs to run firefox and a terminal. Ext4 support would be a plus since my Ubuntu partition is ext4. In the next couple of hours I will try xPUD and DSL. Any other suggestions? EDIT: Tried xpud, hangs on boot.

  • Upgrading Windows 8 Consumer Preview to Release Preview

    - by user1407016
    I currently have my main hard drive split into two partitions, one being Windows 7 with about 600 GB of space, the other Windows 8 Consumer Preview with about 50 GB. As you can guess, it is set up for dual boot. Today, while looking up how to get the C# Facebook SDK for Metro apps, I learned that the Release Preview had been released. I was wondering: how do I go about getting rid of Windows 8 Consumer Preview and replacing it with the Release Preview? I know I can't just wipe it off my second partition, because the dual boot uses Windows 8 to choose the operating system to boot.

  • Collision Resolution

    - by ultifinitus
    Hey all, I'm making a simple side-scrolling game, and I would appreciate some input! My collision detection system is simple bounding-box detection, so it's really easy to implement. However my collision resolution is ridiculous! Currently I have a little formula like this:

        if (colliding(firstObject, secondObject))
            firstObject.resolve_collision(yAxisOffset);
        if (colliding(firstObject, secondObject))
            firstObject.resolve_collision(xAxisOffset);

    where yAxisOffset is only set if the first object's previous y position was outside the second object's collision frame, and respectively for xAxisOffset. Now this is working great, in general. However there is a single problem: when I have a stack of objects and I push the first object against that stack, the first object gets "stuck" on the stack. What I think is happening is that the object's collision system checks and resolves collisions based on creation time, so if I check one axis, then the other, the object will "sink" directly along the checking axis. This sinking action causes the collision detection routine to think there's a gap between our position and the other object's position, and when I finally check the object that I've already sunk into, my object's position is resolved back to its original position... All this is great, and I'm sure if I bang my head against a wall long enough I'll come up with a working algorithm, but I'd rather not =). So what in the heck do you think I should do? How could I change my collision resolution system to fix this? Here's the program (temporary link, not sure how long it'll last) (notes: arrow keys to navigate, click to drop a block, x to jump). I'd appreciate any help you can offer!

  • Cisco VPN and .pcf file

    - by yael
    I have a few profile .pcf files, and I use them in order to automate the vpmclient connection via CLI commands. I have a WIN XP server. For example:

        vpmclient connect "customor_alpha"

    Until now everything has been OK, but I have a problem with the last of my profiles, area1.pcf. The problem is that when I type the following in a CMD window (to create the VPN connection):

        vpmclient connect "area1"

    after 2 seconds a Cisco window pops up and asks for a password (the username is already defined in the window). Please advise: what could be the problem? Why do I get the Cisco VPN window? Or maybe I have some incorrect syntax in my .pcf file? I checked the .pcf file again and again and I couldn't find the problem. Example of area1.pcf (only an example, not my real pcf):

        [main]
        Description=connection to TechPubs server
        Host=10.10.99.30
        AuthType=1
        GroupName=docusers
        GroupPwd=
        enc_GroupPwd=158E47893BDCD398BF863675204775622C49<SNIPPED>
        EnableISPConnect=0
        ISPConnectType=0
        ISPConnect=
        ISPCommand=
        Username=alice
        SaveUserPassword=0
        UserPassword=
        enc_UserPassword=
        NTDomain=
        EnableBackup=1
        BackupServer=Engineering1, Engineering2, Engineering 3, Engineering4
        EnableMSLogon=0
        MSLogonType=0
        EnableNat=1
        EnableLocalLAN=0
        TunnelingMode=0
        TCPTunnelingPort=10000
        CertStore=0
        CertName=
        CertPath=
        CertSubjectName
        SendCertChain=0
        VerifyCertDN=CN="ID Cert",OU*"Cisco",ISSUER-CN!="Entrust",ISSURE-OU!*"wonderland"
        DHGroup=2
        PeerTimeOut=90
        ForceNetLogin=

  • Hyper-V 2008 R2 Install Question

    - by Bill
    I have a 500GB HDD that I installed in the server. If I am going to load Hyper-V R2 on the bare system, do I set the partition to use all this space, or is there a recommended smaller partition size I should set for Hyper-V to run within? This is my first time loading Hyper-V bare on a system. I feel like I should be able to create a small partition of about 40GB for Hyper-V to run within, and then create a second, larger partition to store my VM images. Any thoughts or guidance on this?

  • Is what someone publishes on the Internet fair game when considering them for employment as a programmer?

    - by Jon Hopkins
    (Originally posted on Stack Overflow but closed there, and more relevant here.) So we first interviewed a guy for a technical role and he was pretty good. Before the second interview we googled him and found his MySpace page, which could, to put it mildly, be regarded as inappropriate. Just to be clear, there was no doubt that it was his page (name, photos, matching biographical information and so on). The content was entirely personal and in no way related to his professional abilities or attitude. Is it fair to consider this when thinking about whether to offer him a job? In most situations my response would be that what goes on in someone's private life is their own business. However, for anyone technical who professes (implicitly or explicitly) to understand the Internet and the possibilities it offers, is posting things in a way which can so obviously be discovered a significant error of judgement? EDIT: Clarification - essentially it was a fairly graphic commentary on porn (but of, shall we say, a non-academic nature). I'm actually more interested in the general concept than the specific incident, as it's something we're likely to see more of in the future as people put more and more of themselves online. My concerns are not primarily about him and how he feels about such things (he's white, straight, male and about the last possible victim of discrimination on the planet in that sense), but more about how it reflects on the company that a very simple search (basically his name) returns these things, and that clients may also do it. We work in a relatively conservative industry.

  • Authentication system brainstorm

    - by gansbrest
    Hi. We have multiple small websites (microsites) and one main high-traffic one with a big user base. The requirement is to build an authentication system which allows users to log in with the same identity across the network. All websites run on different domains, are powered by the Drupal 6 CMS, and have separate databases (so sharing tables with a prefix is not an option, plus it creates a huge mess in the db). Here is the set of core requirements I came up with:

      - Users should be able to login with the same credentials to all sites within the network
      - User data is shared between the main site (storage) and all microsites within the network
      - Data is synchronized across the network when a user changes it (updates an email or password, for example)
      - The login/registration process should be seamless and consistent
      - Register on any of the sites across the network and use that identity to login later on

    In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what stackexchange does, but we're not sure whether they have a central user base or not. I was thinking about a custom solution with two parts (modules): one stored on the main site, storing user data and responding to requests from clients; the second placed on each microsite, sending requests to the master. Some kind of client-server setup. One of the complications I see right away is the third requirement, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work has already been done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database.

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some IDs, e.g.:

        173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342}

    This is almost all I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log, but it is really messy. It does contain some IP addresses, but I can't tell which one is bound to which entry in the first log file. Is there a way to correlate the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?
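    As a hedged starting point (the ID below is simply the one from the example entry, and the patterns are guesses about the verbose log's layout), one could search the messy log for the session's ID and for IP-shaped strings:

        # Find lines mentioning the session ID from Connections_incoming.txt,
        # then separately list lines containing IPv4-looking addresses.
        grep -n '173274362' TeamViewer7_Logfile.log
        grep -nE '([0-9]{1,3}\.){3}[0-9]{1,3}' TeamViewer7_Logfile.log

    Matching the timestamps from Connections_incoming.txt against the timestamps on those lines is then a manual but workable way to bind an IP to a session.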

  • Most efficient way to set up a game server

    - by alex bowers
    I'm running a PHP-based game which is predicted to have over 45 million members by the end of this year (2011). Currently we are at 7.5 million. The game runs on Facebook, and I am in desperate need of help to make this game server as efficient and as powerful as possible. It is a dedicated server with these specs:

      - Processor: Intel i7 920, 4x 2x 2.66 GHz
      - NIC: GigaEthernet
      - RAM: 12 GB
      - Hard disk: 4 x 1 TB

    It has Apache, cPanel, phpMyAdmin, several Apache mods, and MySQL installed. The game also runs 47 MySQL calls per second per user. Are there any alternatives to the above which could be faster, more efficient, etc.? I don't mind having to recode the game to fit, as long as it maximises the upper limit of members in the game. Thanks. Also, is there a way to tell what our maximum limit for players, database calls, etc. is? Thank you again, hope you guys can help :)
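    For the "where is our limit" question, one hedged sketch is to watch MySQL's own counters while the game is under real load; these are standard MySQL statements, though which numbers matter depends on the workload:

        # Queries handled since startup (sample twice to get queries/second).
        mysql -e "SHOW GLOBAL STATUS LIKE 'Questions';"
        # Peak concurrent connections versus the configured ceiling.
        mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"
        mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"

    Dividing the queries per second the hardware sustains by the 47 calls per second per user quoted above gives a rough ceiling on concurrent players per database server.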

  • How to Properly Make use of Codeigniter's HMVC

    - by Branden Stilgar Sueper
    I have been having problems wrapping my brain around how to properly utilize the modular extension for Codeigniter. From what I understand, modules should be entirely independent of one another so I can work on one module and not have to worry about what module my teammate is working on. I am building a frontend and a backend to my site, and am having confusion about how I should structure my applications. The first part of my question is: should I use the app root controllers to run modules, or should users go directly to the modules by URLs? I.e., in my welcome.php:

        public function index()
        {
            $this->data['blog'] = Modules::run( 'blog' );
            $this->data['main'] = Modules::run( 'random_image' );
            $this->load->view('v_template', $this->data);
        }

        public function calendar()
        {
            $this->data['blog'] = Modules::run( 'blog' );
            $this->data['main'] = Modules::run( 'calendar' );
            $this->load->view('v_template', $this->data);
        }

    The second part of my question is: should I create separate front/back end module folders?

        -config
        -controllers
            welcome.php
            -admin
                admin.php
        -core
        -helpers
        -hooks
        -language
        -libraries
        -models
        -modules-back
            -dashboard
            -logged_in
            -login
            -register
            -upload_images
            -delete_images
        -modules-front
            -blog
            -calendar
            -random_image
            -search
        -views
            v_template.php
            -admin
                av_template.php

    Any help would be greatly appreciated.

  • How to implement physics and perspective effects on Android

    - by asedra_le
    I'm researching 2D games for Android to implement in an Android game project. My project looks nearly like PaperToss, but instead of throwing a page, my game throws a coin. Suppose I have a coin placed in three-dimensional space with coordinates A(x,y,z). I throw that coin forward; after 1/100 second, the coin moves from A(x,y,z) to A'(x',y',z'). This gives me two problems to solve:

      1. Determine the formulas that can be used to compute the coordinates of the coin at time t. This problem is still under research; I have no idea how to solve it.
      2. Map the three-dimensional points to two dimensions, and use those new two-dimensional coordinates to draw the coin on screen. I have found two solutions for this problem: orthographic projection and perspective projection.

    However, an old friend of mine said that OpenGL can help with problems like these. Does anybody have experience with these problems? Help me please :) Thanks for reading my question.
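    As a hedged sketch of the standard formulas involved (the symbols follow the usual projectile-motion and pinhole-camera conventions, and are not taken from the question): with gravity g acting along the y axis and initial velocity (v_x, v_y, v_z), the coin's position at time t is

        \begin{aligned}
        x'(t) &= x + v_x t \\
        y'(t) &= y + v_y t - \tfrac{1}{2} g t^2 \\
        z'(t) &= z + v_z t
        \end{aligned}

    and a simple perspective projection with focal length f then maps the point to screen coordinates

        x_s = \frac{f \, x'(t)}{z'(t)}, \qquad y_s = \frac{f \, y'(t)}{z'(t)}

    Orthographic projection is the special case that simply drops the z coordinate instead of dividing by it.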

  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of this demo covers basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database, and therefore it's difficult to generate enough problems. I've tried the following to get queries into the slow query log:

      - Set the slow query time to 1 second.
      - Deleted multiple indexes.
      - Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
      - Scripted some basic webpage calls using wget.

    None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skill to write a complex JMeter or other load generator. I'm hoping perhaps for something built into MySQL, or another Linux trick beyond stress.
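    Two easy ways to manufacture slow statements, sketched below with placeholder database and table names; with long_query_time set to 1 second, both should land in the slow query log, though exact logging behavior depends on the server's settings:

        # SLEEP() pads the query time directly.
        mysql -e "SELECT SLEEP(2);" mydb
        # An unindexed self-join grows quadratically with table size;
        # 'mytable' stands in for any reasonably large table in the demo app.
        mysql -e "SELECT COUNT(*) FROM mytable a, mytable b;" mydb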

  • Is defining every method/state per object in a series of UML diagrams representative of MDA in general?

    - by Max
    I am currently working on a project where we use a framework that combines code generation and ORM together with UML to develop software. Methods are added to UML classes and are generated into partial classes where "stuff happens". For example, a UML class "Content" could have the method DeleteFromFileSystem(void), which could be implemented like this:

        public partial class Content
        {
            public void DeleteFromFileSystem()
            {
                File.Delete(...);
            }
        }

    All methods are designed like this. Everything happens in these gargantuan logic-bomb domain classes. Is this how MDA or DDD or similar is usually done? So far my impression of MDA/DDD (which is what the higher-ups have called this) is that it severely stunts my productivity (everything must be done The Way) and that it hinders maintenance work, since all logic is roped, entrenched, and interspersed into the mentioned gargantuan bombs. Please refrain from interpreting this as a rant; I am merely curious whether this is typical MDA or some sort of extreme MDA. UPDATE: Concerning the example above, in my opinion Content shouldn't handle deleting itself as such. What if we change from local storage to Amazon S3? In that case we would have to reimplement this functionality, scattered over multiple places, instead of providing a second implementation for one single interface.

  • Writing an internal disk from IMG, what XP software to use?

    - by Andrew Swift
    I am trying to install Chromium OS on an EEE PC 901, and I have succeeded in using Image Writer for Windows 0.2r23 to copy the IMG file to an SDHC card. Since the OS speed is limited by slow card access, I'd like to install Chromium OS on the second, unused internal SSD drive, D:. However, Image Writer doesn't allow me to restore an internal drive from an IMG file. To be clear: I boot into XP on C:, then run Image Writer to install Chromium OS. Does anyone know how I can either convince Image Writer that D: is a removable drive, or know of an alternative program that will let me restore D: from an IMG file (non-Windows file system)?

  • Two equal items in alternatives list

    - by Red Planet
    I want to have two JDKs. The first one was installed a long time ago to /usr/lib/jvm/java-7-oracle/. I installed the second version and executed the following commands to add it to the alternatives:

        red-planet@laptop:~$ sudo update-alternatives --install "/usr/bin/java" "java" "/opt/java_1.6.0_35/bin/java" 2
        update-alternatives: using /opt/java_1.6.0_35/bin/java to provide /usr/bin/java (java) in auto mode.
        red-planet@laptop:~$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/opt/java_1.6.0_35/bin/javac" 2
        update-alternatives: using /opt/java_1.6.0_35/bin/javac to provide /usr/bin/javac (javac) in auto mode.
        red-planet@laptop:~$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/opt/java_1.6.0_35/bin/javaws" 2
        update-alternatives: using /opt/java_1.6.0_35/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode.

    And configured:

        There are 2 choices for the alternative java (providing /usr/bin/java).

          Selection    Path                                 Priority   Status
        ------------------------------------------------------------
        * 0            /opt/java_1.6.0_35/bin/java          2          auto mode
          1            /opt/java_1.6.0_35/bin/java          2          manual mode
          2            /usr/lib/jvm/java-7-oracle/bin/java  1          manual mode

        Press enter to keep the current choice[*], or type selection number:

    Why do I have two equal items in the list?
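    One hedged way to investigate is to ask update-alternatives what it has actually registered; --display is a standard subcommand that lists each registered path once, which makes it easier to see whether the starred "0" entry is just the auto-mode choice rather than a second registration:

        # Show the registered alternatives, priorities, and the current mode.
        update-alternatives --display java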

  • Gemalto Mobile Payment Platform on Oracle T4

    - by user938730
    Gemalto is the world leader in digital security, at the heart of our rapidly evolving digital society. Billions of people worldwide increasingly want the freedom to communicate, travel, shop, bank, entertain and work – anytime, everywhere – in ways that are convenient, enjoyable and secure. Gemalto delivers on their expanding needs for personal mobile services, payment security, identity protection, authenticated online services, cloud computing access, eHealthcare and eGovernment services, modern transportation solutions, and M2M communication. Gemalto's solutions for Mobile Financial Services are deployed at over 70 customers worldwide, transforming the way people shop, pay and manage personal finance. In developing markets, Gemalto Mobile Money solutions are helping to remove the barriers to financial access for the unbanked and under-served, by turning any mobile device into a payment and banking instrument.

    In recent benchmarks by our Oracle ISVe Labs, the Gemalto Mobile Payment Platform demonstrated outstanding performance and scalability on the new T4-based Oracle Sun machines running Solaris 11. Using a clustered environment on a mid-range 2x2.85GHz T4-2 server (16 cores total, 128GB memory) for the application tier, and an additional dedicated Intel-based (2x3.2GHz Intel Xeon X4200) Oracle database server, the platform processed more than 1,000 transactions per second, limited only by database capacity; higher performance was easily achievable with a stronger database server. Near-linear scalability was observed when increasing the number of application software components in the cluster. These results show an increase of nearly 300% in processing power and capacity on the new T4-based servers relative to the previous generation of Oracle Sun CMT servers, and for a comparable price.

    In the fast-evolving mobile payment market, it is crucial that the underlying technology seamlessly supports service providers as the customer base ramps up, use cases evolve, and new services are launched. These benchmark results demonstrate that the Gemalto Mobile Payment Platform is designed to meet the needs of any deployment scale, whether targeting 5 or 100 million subscribers. Oracle Solaris 11 DTrace technology helped to pinpoint performance issues and tune the system accordingly to achieve optimal use of computation resources.

  • Windows Server 2008R2 IIS7.5, Requests getting 401 status

    - by TLBH
    We have a web site running on Windows Server 2008 R2 / IIS 7.5, and we are seeing several errors reported from our global error handler saying "Request Timed Out". I matched one up to the IIS log file and see the request took 135116 (presumably milliseconds), and had an sc-status of 401, an sc-substatus of 0, and an sc-win32-status of 64. Two requests failed this way, but lots of surrounding requests (1979 successful requests vs 2 failures) for the same user went through perfectly fine, with the same cs-username, which makes a 401 seem a little odd. The target of the requests is an ASP.NET web service's web method, called by the .NET client library; it's called a lot of times per user (3 times per second) to keep a page updated. We're getting some users reporting a freezing effect, and I think this may be the cause. Any ideas? Peter
