Search Results


  • ORA-01019 error only as an administrator

    - by Mike
    I'm having a strange problem. I've installed the Oracle 10g client on a terminal server running Windows Server 2008 R2. When I try to connect to Oracle using, say, Toad, I receive the error "ORA-01019 unable to allocate memory in the user side". But this only happens if I'm logged in as an administrator. If I connect as a normal user, I can connect without issue. Also, if a normal user is connected, I can then connect without a problem as an administrator. Any thoughts?


  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways:

    - If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:
      - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
      - These interdependencies are not obvious to the make utility, and can lead to races.
      - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.
    - Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:
      - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
      - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
      - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long-term, case-by-case project, but the overall trend is clear. A sketch of the installation pattern appears below.
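    The following is a minimal sketch of that pattern. The paths and the STUBROOT variable are invented for illustration; they are not the actual ON build macros:

        # Hypothetical layout: STUBROOT is the stub proto root, parallel to
        # the real proto root. A private header installed here is visible
        # to the rest of the build but is never packaged or shipped.
        STUBROOT=$WS/proto/root_$MACH-stub

        mkdir -p $STUBROOT/usr/include
        cp private_impl.h $STUBROOT/usr/include/   # stub proto, not the proto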
    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.

    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that makes up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self-contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas.

    In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error.

    The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible. The sketch below illustrates the silent fallback involved.
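    Here is a hedged illustration of that fallback; the workspace paths and library name are invented for this example:

        # With the -L options intact, libfoo is resolved from the workspace
        # proto, searched in the order given:
        ld -o prog main.o \
            -L$WS/proto/root_i386/lib -L$WS/proto/root_i386/usr/lib -lfoo

        # If a makefile error wipes out the -L options, the same link still
        # succeeds, but libfoo now silently comes from the running system
        # (/lib, /usr/lib on the build server):
        ld -o prog main.o -lfoo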
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.

    Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them.

    This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

        Enables warning messages for libraries specified with the -l command
        line option that are found by examining the default search paths
        provided by the link-editor. If a libname value is provided, the
        default library warning feature is enabled, and the specified library
        is added to a list of libraries for which no warnings will be issued.
        Multiple -z assert-deflib options can be specified in order to specify
        multiple libraries for which warnings should not be issued. The
        libname value should be the name of the library file, as found by the
        link-editor, without any path components.

    For example, the following enables default library warnings, and excludes the standard C library:

        ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better.

    Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011). A sketch of the symlink arrangement appears below.
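    The following is a hedged sketch of that arrangement, with invented paths: each non-OSnet dependency gets a symlink in the stub proto, so everything a link needs is reachable via -L, and any hit on the default paths is a genuine error:

        # Make the non-OSnet library visible through the stub proto
        ln -s /usr/lib/libxml2.so $STUBROOT/usr/lib/libxml2.so

        # Link with default-path warnings enabled and promoted to errors
        ld -o prog main.o -L$STUBROOT/usr/lib -lxml2 \
            -z assert-deflib -z fatal-warnings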
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.


  • Samba shares disappear every night

    - by Crash893
    I have Ubuntu 8.04 LTS and recently a weird problem has been cropping up. Every night something happens, and in the morning my coworkers can't see the shares. If I try to remote into the machine via SSH, I don't get a prompt. When I rebooted the machine I would get a "video cannot be displayed in this mode" screen and no other activity on the box. I booted from GRUB into recovery and tried doing a package repair (keeping my smb.conf), and that didn't seem to do anything. After a few other reboots I was able to get it to come up (I'm not sure what I did). Yesterday it did the same thing; I booted to recovery, did an xserver repair, and it came right up, so I thought that resolved the issue, but then today: same thing. Does anyone have any idea what I can look for? (I'm very new to Linux in general.) Worst case scenario, can I just reinstall Ubuntu over again without blowing out the data?


  • After logging out of SSH, screen sessions disappear on Arch Linux

    - by Ivan
    On Arch Linux (I'm on a single dedicated server, where my domain name points to only one IP), when I SSH into a user (say, for example, user mc), do screen -S test (or -dmS; the resulting issue is the same), run a command, detach, exit my SSH session, and log back in, the screen session disappears. screen -ls returns "No Sockets found in /run/screens/S-mc". The only way I can reattach to my sessions is if I never log out of my SSH session. How do I fix this? Notes:

    - I do have read/write access in /run/screens/S-mc
    - I detach from screen sessions with Ctrl-A,D
    - disown -a && exit gives me the same problem
    - shopt huponexit returns "huponexit off"
    - There is no ~/.logout, and ~/.bash_logout is empty apart from 3 lines of comments identifying it as the ~/.bash_logout file
    - ls -l /usr/bin | grep screen returns:

        lrwxrwxrwx 1 root root     12 Oct 31  2012 screen -> screen-4.0.3
        -rwsr-xr-x 1 root root 363672 Oct 31  2012 screen-4.0.3


  • Recommendations for Spam Filter

    - by dotdev
    We are currently using MxGuardDog for spam filtering. It works by pointing our MX records at their mail servers. The service seems pretty good; it keeps out the obvious spam, but I would still say it lets through mail that to me is spam, though I accept that on the surface those emails may not flag any of the universally recognised indicators for spam. If an email comes through that I believe is spam, I can log in to the Web Console and blacklist the email/domain. However, 99% of the time I don't, because it's inconvenient - or, should I say, it's far less convenient than a button in Outlook that allows me to report the email/domain as spam. So, what we're looking for is a similar service, i.e. cloud spam filtering, that has an Outlook plugin so that administrators/users can report spam. We are only a small company, 10 users, so cost is of course an issue for us. Many thanks, dotdev


  • How can I convince my boss to invest into the developer environment?

    - by user95291
    Our boss says that if developers made fewer mistakes, the company would have money for displays, servers, etc. An always-mentioned example is the late firing of an underperforming colleague whose salary would have covered some of these expenses. On the other hand, it has happened a few times that it took several days to free up some disk space on our servers, since we can't get any more disk. The cost in man-days was definitely higher than the cost of a new HDD. Another example is that we use 14-15" notebooks for development, and most of the developers only get external displays after they have spent one year at the company. The price of a 22-24" display is just a small fraction of a developer's annual salary. Devs say that they like the company for other reasons (high quality code, interesting projects, etc.), but these kinds of issues are not just time-consuming but also demotivating. From the developers' point of view, it seems that the boss can always find something in the past that could have been done better, so it feels pointless to work better in order to get a second display/HDD/whatever. How can I convince my boss to invest more into the development environment? Is it possible to break this endless loop?


  • Why do I need to create a bios-grub partition when I install 12.04?

    - by raj
    Is the bios-grub partition in Ubuntu 12.04 mandatory? I have used 11.04, 11.10 and 12.04, but I was never asked for this. Today I tried a fresh installation of Ubuntu 12.04, and for the first time I was asked for this GRUB partition of minimum 1 MB. I first tried to reinstall 12.04, but the error continued. So I installed Fedora 16, keeping all partitions as they were (replaced Ubuntu with Fedora), and then did another fresh installation of 12.04. Is it OK to continue with this GRUB partition, or is there a fault in my system's hardware? If this is a (hardware) fault, how can I fix it? I'm using a Lenovo S10-2 IdeaPad. The only OS installed right now is Ubuntu 12.04.

    Well, let me answer. It was a /usr/bin/Xorg issue that I had with the first-installed Precise. I used Fedora 16 basically for removing Precise totally (my experience tells me Ubuntu can't completely erase and reinstall by itself). This 1 MB GRUB partition was created by Fedora. I then wanted to remove it while reinstalling Ubuntu, but got a caution that the bootloader may fail, hence I have to keep this 1 MB partition. But prior to yesterday, I used both Fedora and Ubuntu, even the same CDs, and had no such partition. My question is whether this partition is necessary or not, and if not, how can I safely remove it from my system? I am using only Ubuntu 12.04 - before and after (now).


  • Exchange 2003 ActiveSync fine but GAL lookups not working

    - by Daniel Lucas
    We have Exchange 2003 SP2 and use ActiveSync for our mobile devices (iOS and several Android versions). Everything works except for GAL lookups, and we've confirmed that the devices and client versions we are trying it on are supposed to support it. The symptom is different depending on the client, but there is never an error. In a nutshell, lookups return no results. This was working previously, and we aren't sure what has changed. Are there logs in Exchange or IIS that will allow me to see GAL lookup transactions? Could it be a GAL permissions issue? (There are no issues with the GAL in Outlook.) Could it have been caused by upgrading our DCs to 2008 and raising the forest/domain functional levels?


  • django fcgi - call a management command with subprocess.Popen

    - by user41855
    Hi, I'm using an app called django-chronograph. It has a line of code which works in my dev environment but does not work in production:

        p = subprocess.Popen(['python', get_manage_py(), 'run_job', str(self.pk)])

    This line crashes in production with "unknown command run_job", whereas when I run it directly from the command line:

        manage.py run_job

    it works fine. Interestingly, it worked once when we exchanged 'python' with '/usr/bin/python'; then we restarted the server once more and it was back to the old behaviour. Thus it seems as if we have a Python path issue. I'm not the guy who is running the server; it's my app that should run, and it would be great to get some help here. Attention: I'm a total noob regarding server administration. Server environment: nginx with an FCGI daemon, FCGI in prefork mode.
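    A plausible fix along the lines the poster suggests - a hedged sketch, not django-chronograph's actual code: use sys.executable, the absolute path of the interpreter running the current process, instead of relying on whatever 'python' resolves to on the FCGI daemon's PATH. get_manage_py below is a stand-in for the app's own helper, and pk is a placeholder:

        import os
        import subprocess
        import sys

        def get_manage_py():
            # Stand-in for django-chronograph's helper: returns the path
            # to the project's manage.py script.
            return os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                'manage.py')

        pk = 42  # placeholder job id
        # sys.executable is the interpreter running this process, so the
        # child runs under the same Python even when PATH differs under
        # the FCGI daemon.
        p = subprocess.Popen([sys.executable, get_manage_py(),
                              'run_job', str(pk)])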


  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
          Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
          Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1, we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA.

    Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents; a rough sketch of the dictionary queries involved follows.
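    The following is a hedged sketch of the two kinds of dictionary query described above: a one-hop lookup on all_constraints, and a CONNECT BY traversal over all_dependencies of the kind discussed next (shown walking outward references only; the real traversal also follows the reverse direction). The schema name is a placeholder and the column lists are simplified; the queries the product actually generates are considerably more involved:

        -- One hop: foreign keys crossing the populated schema's boundary
        SELECT owner, table_name, r_owner, r_constraint_name
        FROM   all_constraints
        WHERE  constraint_type = 'R'
        AND    (owner = 'SCHEMAA' OR r_owner = 'SCHEMAA');

        -- Transitive: walk the dependency graph outwards from the schema
        SELECT owner, name, referenced_owner, referenced_name
        FROM   all_dependencies
        START WITH owner = 'SCHEMAA'
        CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                       AND PRIOR referenced_name  = name;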
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL: CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating.

    Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
          Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
          Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
          Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE              TARGET
        SchemaA.Table1  ->  SchemaA.Table1
        SchemaB.Table1  ->  (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this, we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
          Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
          Col1 NUMBER);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
          Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
          Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
          Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find initial dependencies of schemas the user has selected to compare on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies
       - SchemaA.Table1 -> SchemaB.Table1 found on source
       - No objects found on target
    2. Include objects in both source and target
       - SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects
       - No objects to start with on source
       - SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target
       - SchemaC.Table1 included in source and target
    5. Run dependency query on found objects
       - No objects found in source
       - No objects to start with in target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server), and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits: we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in those schemas. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others), we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory.

    Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some clauses, retrieving dependency information we required, that were querying system tables with no indexes on them!
    To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born: not querying this and a couple of similar clauses drastically speeds up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!


  • How or why would this mechanic (not) work to bring game balance to a singleplayer RPG? [closed]

    - by 0xFFF1
    Mechanic details: The player, the monsters, and the merchants act as three separate parties. The player needs to beat up monsters for exp points and resources to sell, and to buy potions from merchants to continue to fight. The monsters need healing and reviving to survive (also bought from merchants), and the merchants need potion ingredients from the player and the monsters to make potions to sell. These potions can only be processed in such bulk by merchants, thus their potions would be cheaper than making them yourself. Only the monsters can farm ingredients in bulk. Only the player is, or has to be, overly aggressive (in bulk). Monsters can farm and produce "level up candies" that do the work of exp; they are eaten right away after they are made and are never stockpiled or held, for fear of the player and the merchants who want to sell to the player. The monsters will defend themselves. Reviving is very expensive. The merchants can be found either with a concerned expression or a grinning expression, based on how much profit they are making compared to their moral standing. The economies of each monster town and merchant city are distinct but interconnected. Magic swords are worth a lot. So what I need to know is: what concerns would there be in designing a game around this mechanic and/or designing this mechanic around a developing game, and which would fare better? Is game balance an issue here (how strong the monsters get, or how quickly they die off, based on the player's input into the system), or is game balance solely in the hands of the player (he decides if he overkills monsters or gets underleveled)? What do I need to think about to make sure it isn't too easy or too hard to swing the amount/strength of monsters compared to the player, and the amount of profit the merchants get vs. the player? Would indicating in-game how out of whack things are getting help with this?


  • Clonezilla is not able to clone a RAID 1 disk

    - by Adrian
    I have an HP Server DL320 G5. There are two SATA hard disks configured as RAID 1 through the HP embedded RAID controller, and the server OS is GNU/Linux (Fedora). I booted the server with a Clonezilla live CD; the image is to be stored on a NAS connected through NFS. Clonezilla could mount the NFS share and could see the two hard disks, /dev/sda and /dev/sdb. I selected /dev/sda for disk cloning. However, I could not see any cloning progress and got dropped straight into a prompt offering reboot, poweroff, or command line. I tried selecting /dev/sdb instead, but hit the same issue.


  • Control truetype font tracking in fontforge

    - by ??O?????
    Hello. I'm designing a font intended to be used mainly as a subtitle font. It's a sans-serif font not unlike Helvetica. The em size is 2000 (I know it "should" be 2048 for a TrueType font), and the ascent/descent are 1600/400. Most glyphs don't go higher than 1450, though, with the exception of accented Latin characters. There lies the issue: when my font includes accented Latin characters, the tracking (line-of-text intra-distance, but correct me if I got the term "tracking" wrong) skyrockets. I can't seem to find a way to control that. As my intended design goes, I don't mind if I have a descender (e.g. "g") above and an accented capital Latin letter below and they almost touch. It's something that does not happen very often, anyway. How can I control tracking? It seems it's auto-calculated.


  • Compiling LaTeX document makes Google Drive crash

    - by Sander
    I have the issue that when I compile a LaTeX document located inside my Google Drive folder, after a few runs this makes the OS X Google Drive application crash. As this is an important document, I want to keep it inside the Google Drive location at all times to ensure a cloud backup, but that of course is not guaranteed if it makes Google Drive crash all the time. I can't seem to identify what is causing this, and I was hoping that maybe some people here have an idea what might cause it? We're talking about an 8-page document with 3 images, so nothing crazy big or complex.


  • MWS2K8R2: Enabling Media Sharing using Streaming Media Services Role

    - by TheLizardKing
    So I have a Microsoft Windows Server 2008 R2 machine that stores a large collection of media (mostly MP3s), and I want to be able to deliver these files using a server/client setup, with Windows Media Player as the client. I downloaded and installed the Streaming Media Services role, and even set up a publishing point with on-demand access. My issue is that I can connect using WMP12, but it only connects as more of a stream than a shared library. I can pause/play/skip as if it's a powerful radio station, which is OK in my book, but what I'd really like is to be able to control my music remotely, search for and play artists, maybe create playlists (not required but nice), and even connect it to an Xbox. Is the Streaming Media Services role not what I should be using for this? Would installing WMP and sharing using that mechanism be a better option? Any ideas?


  • Simulating an UNC path with a leading dot

    - by Uwe Keim
    We are C# .NET Windows Forms developers, and some of our customers run our applications on an Apple OS X Mac inside a Parallels virtual machine. Parallels presents host folders to the guest Windows as UNC paths with a leading dot, like

        \\.psf\Home\Some\More\Folders

    Now one of our applications cannot handle the leading dot correctly when accessing files from these kinds of shares ("Invalid URI, cannot analyze host name" exception). I want to debug and fix this issue; unfortunately I have no Mac and no Parallels around here to test with. My question is: is there a way to "simulate" this kind of share on a normal Windows server or client so that I'll be able to debug my application with Visual Studio? What I tried so far: I already tried to edit my HOSTS file to contain an entry like

        # ...
        127.0.0.1 .psf
        # ...

    but Windows just seems to not recognize the share at all.


  • ApplicationPool in IIS 7.5 crashes with identity set to NetworkService

    - by Ravi
    We have a web application running on IIS 7.5, with the identity of its custom application pool set to NetworkService. This was working fine for some days, and now the application pool has gone into a stopped state. The following error message is displayed in the Event Viewer:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b
        Faulting module name: ntdll.dll, version: 6.1.7600.16559, time stamp: 0x4ba9b29c
        Exception code: 0xc0000005
        Fault offset: 0x00038c19
        Faulting process id: 0xa28
        Faulting application start time: 0x01cbb2e5707aa2b2
        Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\SysWOW64\ntdll.dll
        Report Id: ae3f0610-1ed8-11e0-abf8-000c297f918f

    We are able to start the application pool only after changing the identity to LocalSystem. Why does the application pool fail to run with the identity set to NetworkService? Can anyone help us resolve this issue?


  • Problem with testsaslauthd and kerberos5 ("saslauthd internal error")

    - by danorton
    The error message "saslauthd internal error" seems like a catch-all for saslauthd, so I'm not sure if it's a red herring, but here's the brief description of my problem. This Kerberos command works fine:

        $ echo getprivs | kadmin -p username -w password
        Authenticating as principal username with password.
        kadmin: getprivs
        current privileges: GET ADD MODIFY DELETE

    But this SASL test command fails:

        $ testsaslauthd -u username -p password
        0: NO "authentication failed"

    saslauthd works fine with "-a sasldb", but the above is with "-a kerberos5". This is the most detail I seem to be able to get from saslauthd:

        saslauthd[]: auth_krb5: krb5_get_init_creds_password: -1765328353
        saslauthd[]: do_auth : auth failure: [user=username] [service=imap] [realm=] [mech=kerberos5] [reason=saslauthd internal error]

    Kerberos seems happy:

        krb5kdc[](info): AS_REQ (4 etypes {18 17 16 23}) 127.0.0.1: ISSUE: authtime 1298779891, etypes {rep=18 tkt=18 ses=18}, username@REALM for krbtgt/DOMAIN@REALM

    I'm running Ubuntu 10.04 (lucid) with the latest updates, namely Kerberos 5 release 1.8.1 and saslauthd 2.1.23. Thanks for any clues.


  • External mouse clicks not working during UAC prompt/windows or other elevated privilege programs

    - by user26453
    I have a Logitech Revolution VX, a small portable mouse, which I use on my laptop running Windows 7 64-bit Professional. When a UAC prompt comes up, I can use the mouse to move the pointer; however, any clicking via the external mouse is disabled. I have to use my trackpad to actually click. This continues into whatever UAC window comes next, such as a program install: the mouse pointer can move via my external mouse, but no button clicks register except on the trackpad. This also happens if I right-click a program and select "Run as Administrator"; the external mouse will move the pointer, but no button clicks work. I've tried to google this issue but haven't found anything. To me it seems generally a problem with my external mouse driver not running with elevated privileges itself. Any help would be appreciated.


  • Puppet, Nagios, Munin on cPanel based hosts

    - by WinkyWolly
    I've been managing 20-30~ cPanel-based hosts over the past year with Puppet, Nagios and Munin for general monitoring/trending; however, a lot of the methods I've had to use to deploy and manage things such as configurations have been a pain. For those of you who aren't familiar with cPanel: it adds a few things to the yum excludes, such as perl*, ruby* and so forth. This causes issues with me being able to bootstrap monitoring on a new server via Puppet (well, via the Package type) due to a bunch of conflicts when installing via yum. Now, I could create a custom RPM for everything and remove certain dependencies from the spec file; however, I would like to avoid this if possible. Does anyone have any proposed functional ways to manage this sort of environment? Currently I install Puppet, Facter and Munin via RPMs and force-install using --nodeps and such (since the dependencies are installed, just not the ones yum wants). Nagios I installed manually from source at this time (I will likely create RPMs, however I want to tackle this general issue first).


  • still about perl vs python but (to me) slightly different from what has been asked [closed]

    - by B Chen
    Being a newbie to coding, I read on this site that Perl is still as viable as it has been, while Python, quoted from someone else's post, is good but just "snake oil" (not sure what this refers to exactly, though). So from the responses in that post, I got the gist that Perl is good and worth learning. My question is - pardon me for phrasing it in this "non-programmer's" way - which one should I learn FIRST? (I am actually currently learning R.) Here below is the background info: (a) I will be using it mostly for data mining and statistics analysis. (b) Will there be a "first" and "later" issue with learning either Perl or Python? That is, after I become competent with one language, would there be a need to learn the second one (for a similar task)? (c) If there should be circumstances where I must learn the second one, would learning Perl FIRST be better than learning Python? I hope to learn as much from exchanging info here, so please help provide more than just "it depends" type of info. Great many thanks to all who choose to respond to my query.


  • why dstat failed with --tcp option?

    - by hugemeow
    Why did the --tcp option fail? Should I install some package? If yes, which package should I install? (This question is not the same as http://stackoverflow.com/questions/10475991/tcp-sockets-double-messages.)

        $ dstat --tcp
        Module tcp has problems. (Cannot open file /proc/net/tcp6.)
        None of the stats you selected are available.

    What is the problem with the tcp module, and how do I find the problem? What should I do in order to fix this issue?


  • Rainmeter customization issues, pretty basic

    - by user62865
    I'll admit it; I'm new to using Rainmeter. What I want to do is create two panels (one on the left, the other on the right) that mirror one another. One side I want to work for the Eastern US while the other covers the UK. I want to keep digital clocks, the same style etc., on both sides, each with its respective time zone. On top of this I want to have weather reports for the whole day (current weather and temperature; morning, afternoon, and night) for each location. However, I have not been able to find any skins which allow this. For whatever reason, every time I try to load a skin for a clock I can only load it for one clock of that type, and I'm having the same issue with the weather skins. Could anyone please tell me what skins I should look at and how to get a mirrored look?


  • CodePlex Daily Summary for Wednesday, September 19, 2012

    Popular Releases

    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: Namespace changes DebugConsol...
    - Python Tools for Visual Studio: 1.5 RC: PTVS 1.5 RC Available! We're pleased to announce the release of Python Tools for Visual Studio 1.5 RC. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, etc. support. The primary new feature for the 1.5 release is Django including Azure support! The http://www.djangoproject.com is a pop...
    - Launchbar: Lanchbar 4.0.0: First public release.
    - AssaultCube Reloaded: 2.5.4: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes. Try to compile it; if it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). Changelog: New logo. Improved airstrike! Reset nukes...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video. Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2: for detailed release notes check the release notes. JayData core: all async operations now support promises. JayDa...
    - fastJSON: v2.0.5: 2.0.5 - fixed number parsing for invariant format - added a test for German locale number testing (,. problems)
    - ????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.
    - VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60. Additional sample rates available: 8, 11.025, 12 and 16 kHz. Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will keep running and continue pro...
    - DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release fixed this issues: Links not worked on FF, Chrome and Safari. Modified packaging with own manifest file for install and source package. Moved the user image on the Login to the left side. Moved h2 font-size to 24px. Note: This release comes w/o source package; we still work on a solution. Who needs the Visual Studio source files, please go to Source and download it from there. Known: 16 CSS issues that relate to the skin.css. All others are DNN default o...
    - Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.
    - sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.1.0: This release of sheetengine introduces major drawing optimizations. A background canvas is created with the full drawn scenery, onto which only the changed parts are redrawn. For example, a moving object will cause only its bounding box to be redrawn instead of the full scene. This background canvas is copied to the main canvas in each iteration. For this reason the size of the bounding box of every object needs to be defined, and also the width and height of the background canvas. The example...
    - VFPX: Desktop Alerts 1.0.2: This update for the Desktop Alerts contains changes to behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...
    - MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit): 1. Added support for %originalfilepath% to get the source file full path. Used for custom commands only. 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata. 4. Added support for eMail non blocking UI test
    - CrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.2: Added html mail format which shows hierarchical exception report for better understanding.
    - PDF Viewer Web part: PDF Viewer Web Part: PDF Viewer Web Part
    - IIS Express Manager: IIS Express Manager v 0.5B: Several added features, including adding sites and a right-click menu for sites, which allows you to start/stop a site, view it directly in the browser, etc.
    - Chris on SharePoint Solutions: View Grid Banding - v1.0: Initial release of the View Creation and Management Page Column Selector Banding solution.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. Update for issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. Removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...
    - Lakana - WPF Framework: Lakana V2: Lakana V2 contains: Lakana WPF Forms (with sample project), Lakana WPF Navigation (with sample project)
    - Microsoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...

    New Projects

    - Cachalote-Todo: Cachalote Todo
    - Commerce Server Tools: A collection of tools and samples for Microsoft and Ascentium Commerce Server
    - CopiarArquivosNaRede: facilitate the process of copying files from one machine to several host machines; should be used primarily by network administrators and support teams.
    - CS 3750 Team Ghana: Weber State University's Computer Science Department students are working on feature-completing the Ghana Hospital inventory system in ASP.NET C#.
    - CUDAFY TSP: Solving the Traveling Salesman Problem in C# on the GPGPU.
    - Deixei Software Factory: Line of Business applications are one of the most important assets in an enterprise environment. You do not need to build the basis over and over.
    - FocusMeter: Tiny tray application for tracking distractions.
    - FrontTest: ????????
    - generalshop: Hosts a number of ASP.NET controls I developed over time: 1, RolloverImageButton
    - jschome: ??????。
    - OVS Web App: OVS web app
    - PeoplePicker Port Tester: The PeoplePicker Port Tester helps with troubleshooting PeoplePicker issues.
    - Project Demo: hajshjdhfjkhdskfhdkhfkahdkj
    - ProyectoLenguaje2: Programming Language II project. By: Moreira Kennedy Palma Luis
    - SharePoint 2010 and 2013 jQuery / JS code samples: Various code samples for SharePoint 2010 and SharePoint 2013
    - Strom: A project
    - superScheduler: A TDD project to help the contributors learn and have fun. The result should be something that looks like a cross-platform task scheduler.
    - Troll Face SDK: Application project using the Face SDK for Windows Phone, for demonstration purposes
    - Useful PowerShell Scripts: Powerful & useful PowerShell scripts
    - WeiTalk: Sina weibo for Windows Phone
    - XML.NET Serializer: Striving for: great performance; very easy to use; very flexible and configurable; ability to configure both via configuration file and/or attributes


  • Citrix ICA Client on Mac shift key doesn't work as expected

    - by brianegge
    When I'm connected to a Windows XP computer from my Mac OS X 10.6 machine using the Citrix ICA Client, it seems that the Shift key only works for the first letter typed after pressing Shift. In order to type multiple uppercase letters, like "ICA", I must either press Caps Lock, or press and release Shift before each character. I've tried switching between the standard and enhanced keyboard, as well as the 'Send Special Keys Unchanged' option, but none of these seem to affect the issue. The problem doesn't occur when I switch from the Citrix window to a regular Mac window.

