Search Results

Search found 18220 results on 729 pages for 'null hypothesis'.


  • Why unhandled exceptions are useful

    - by Simon Cooper
    It's the bane of most programmers' lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you've got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application's unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what's wrong. However, it's good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid.

    Coding assumptions
    Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn't in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem<T>(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in the collection already
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already
                    // simply copy the collection to an array
                    // and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }
            // not in the collection
            // copy coll to an array, and add obj to it
            // then return it
            T[] array = new T[coll.Count + 1];
            coll.CopyTo(array, 0);
            array[array.Length - 1] = obj;
            return array;
        }

    What are all the assumptions made by this fairly simple bit of code?

        coll is never null
        comparer is never null
        coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
        The enumerator for coll returns all the items in the collection, in the order defined for the collection
        comparer.Equals returns true if the items are equal (for whatever definition of 'equal' the comparer uses), false otherwise
        comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
        coll will have fewer than 4 billion items in it (this is a built-in limit of the CLR)
        array won't be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
        There are no threads that will modify coll while this method is running

    and, more esoterically:

        The C# compiler will compile this code to IL according to the C# specification
        The CLR and JIT compiler will produce machine code to execute the IL on the user's computer
        The computer will execute the machine code correctly

    That's a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows that one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you've gained about your own code, along with the modified assumptions.

    When code is running with an assumption or invariant it depended on broken, the result is 'undefined behaviour'. Anything can happen, up to and including formatting the entire disk or making the user's computer sentient and doing a good impression of Skynet. You might think that those things can't happen, but at Halting-problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it's important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

    What does this mean in practice?
    To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that's some more assumptions you've made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken. Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well. For members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, you don't necessarily need to check that explicitly in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what's wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.

    On my coding soapbox…
    A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

        try {
            ...
        }
        catch { }

    A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions caused by the unknown state, making debugging difficult because the catch-all has hidden the original problem. It's much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That's a pretty big ask! Secondly, using as when you should be casting. Doing this:

        (obj as IFoo).Method();

    or this:

        IFoo foo = obj as IFoo;
        ...
        foo.Method();

    when you should be doing this:

        ((IFoo)obj).Method();

    or this:

        IFoo foo = (IFoo)obj;
        ...
        foo.Method();

    There's an assumption here that obj will always implement IFoo. If it doesn't, then by using as instead of a cast you've turned an obvious InvalidCastException at the point of the cast, which would probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; it's then far easier to figure out what's wrong. Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don't leave it up to whoever's debugging your code after you to figure it out.

    Conclusion
    It's better to crash out and fail fast when an assumption is broken. If the program doesn't, there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren't good per se, but they give you some very useful information about your code that you didn't know before. And that can only be a good thing.
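
    To make the "document and check your assumptions" advice concrete, here is a minimal sketch in the spirit of the article; the OrderProcessor type and its members are invented for illustration and are not taken from the original.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public static class OrderProcessor
    {
        // Public API: check arguments up front and fail fast with a descriptive exception.
        public static void ProcessBatch(IList<string> orderIds, IComparer<string> comparer)
        {
            if (orderIds == null) throw new ArgumentNullException("orderIds");
            if (comparer == null) throw new ArgumentNullException("comparer");
            if (orderIds.Count == 0)
                throw new ArgumentException("At least one order id is required.", "orderIds");

            SortAndProcess(orderIds, comparer);
        }

        // Private helper: document the assumption and check it only in DEBUG builds.
        // Debug.Assert is compiled away in release builds, so there is no performance hit.
        private static void SortAndProcess(IList<string> orderIds, IComparer<string> comparer)
        {
            // Assumption: the caller has already filtered out null entries.
            Debug.Assert(!orderIds.Contains(null), "orderIds must not contain null entries");

            // When you believe an object always implements an interface, cast it rather
            // than using 'as', so a wrong type fails immediately with InvalidCastException:
            //     var formatter = (ICustomFormatter)someObject;

            // ... process the orders ...
        }
    }
    ```

    The public entry point throws the usual argument exceptions, while the Debug.Assert call documents the kind of internal assumption the article suggests checking only in debug builds.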


  • PuTTY/SSH: How to Prevent Auto-Logout?

    - by feklee
    My ISP's SSH server (Debian 2.0) logs me out after 35 minutes of inactivity, when connected with PuTTY (Windows XP). This is a big problem when I utilize the server for port-forwarding. The final messages displayed in the terminal:

        This terminal has been idle 30 minutes.
        If it remains idle for 5 more minutes it will be logged out by the system.
        Logged out by the system.

    PuTTY options that do not help:

        Sending of null packets to keep session active.
        Seconds between keepalives (0 to turn off): 30
        [x] Enable TCP keepalives (SO_KEEPALIVE option)

    Any idea how to avoid the auto-logout? Should I try another SSH client?


  • debian - running unattended-upgrades on a particular day of the week

    - by dastra
    We're running unattended-upgrades on Debian squeeze, and would like it to run once a week, only on a Wednesday morning. To attempt this, we have set the following in /etc/apt/apt.conf.d/50unattended-upgrades:

        APT::Periodic::Unattended-Upgrade "7"

    and then touched /var/lib/apt/periodic/update-stamp to set the timestamp to a Wednesday, for instance:

        touch -t 201211280000 /var/lib/apt/periodic/update-stamp

    Running:

        stamp=$(date --date=$(date -r /var/lib/apt/periodic/update-stamp --iso-8601) +%s 2>/dev/null)
        date -u --date="1970-01-01 $stamp sec GMT"

    gives the correct timestamp:

        Wed Nov 28 00:00:00 UTC 2012

    However, unattended-upgrades then seems to ignore this, and runs the updates on a Saturday morning. Could anyone enlighten me as to how this parameter works, and how to set up upgrades to run on a Wednesday?


  • force https with apache before .htpasswd

    - by johnlai2004
    I have this in my .htaccess file:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://www.myweb.com/phpmyadmin$1 [R,L]
        AuthUserFile /var/www/myweb/.htpasswd
        AuthGroupFile /dev/null
        AuthName "Sovereign Databases"
        AuthType Basic
        <Limit GET>
        require valid-user
        </Limit>

    But every time I go to http://www.myweb.com/phpmyadmin, the .htpasswd prompts me for credentials BEFORE I'm redirected to https://www.myweb.com/phpmyadmin. After I type in my username and password, I get redirected to https://www.myweb.com/phpmyadmin. The problem is that I don't want anyone to submit their username and password unencrypted via http. How do I force people to log in via the https version even if they typed in the http version?


  • Oracle FAQ (Japanese)

    - by Yusuke.Yamamoto
    [The Japanese text of this FAQ is mis-encoded in this copy. The recoverable fragments concern Oracle Database SQL tuning and optimizer statistics: querying the DBA_TABLES, DBA_INDEXES and DBA_TAB_COLUMNS dictionary views as SYS, checking the LAST_ANALYZED column, CPU and I/O considerations, and gathering statistics with DBMS_STATS, e.g. EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP'); for a single table and EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT'); for a whole schema.]


  • Better way to generate enemies of different sub-classes

    - by KDiTraglia
    So let's pretend I have an enemy class that has some generic implementation, and inheriting from it I have all the specific enemies of my game. There are points in my code where I need to check whether an enemy is a specific type, but in Java I have found no easier way than this monstrosity...

        // Must be a better way to do this
        if ( enemy.getClass().isAssignableFrom(Ninja.class) ) {
            ...
        }

    My partner on the project saw these and changed them to use an enum system instead:

        public class Ninja extends Enemy {
            // EnemyType is an enum containing all our enemy types
            public EnemyType EnemyType = EnemyTypes.NINJA;
        }

        if (enemy.EnemyType == EnemyTypes.NINJA) {
            ...
        }

    I also have found no way to generate enemies on varying probabilities besides this:

        for (EnemyTypes types : enemyTypes) {
            if ( (randomNext = (randomNext - types.getFrequency())) < 0 ) {
                enemy = createEnemy(types.getEnemyType());
                break;
            }
        }

        private static Enemy createEnemy(EnemyType type) {
            switch (type) {
                case NINJA:
                    return new Ninja(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case GORILLA:
                    return new Gorilla(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case TREX:
                    return new TRex(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                // etc
            }
            return null;
        }

    I know Java is a little weak at dynamic object creation, but is there a better way to implement this, something like:

        for (EnemyTypes types : enemyTypes) {
            if ( (randomNext = (randomNext - types.getFrequency())) < 0 ) {
                // Change enemyTypes to hold the classes of the enemies I can spawn
                enemy = types.getEnemyType().class.newInstance();
                break;
            }
        }

    Is the above possible? How would I declare enemyTypes to hold the classes if so? Everything I have tried so far has generated compile errors and general frustration, but I figured I might ask here before I completely give up and live with the huge mass that is the createEveryEnemy() method. All the enemies do inherit from the Enemy class (which is what the enemy variable is declared as). Also, is there a better way to check which type a particular enemy is that is shorter than enemy.getClass().isAssignableFrom(Ninja.class)? I'd like to ditch the enums entirely if possible, since they seem repetitive when the class name itself holds that information.


  • Logrotate, is this a proper config for what I want to do?

    - by Felthragar
    I started using logrotate a few days ago on a new server setup (actually three of them). My config is as follows:

        /var/www/mywebsite.com/logs/*.log {
            rotate 14
            daily
            dateext
            compress
            delaycompress
            sharedscripts
            postrotate
                /usr/sbin/apache2ctl graceful > /dev/null
            endscript
        }

    The problem is that this is putting several days of logs into the same file. For example, I've currently got a file called access.log-20121005 which has logs for Oct 3rd, Oct 4th and Oct 5th in it. Is that proper behaviour? What I want is for it to create one logfile for each day and keep 14 days of logs. Any help appreciated, thanks.


  • ffmpeg add two audio streams to video

    - by Tossin Hausen
    I tried this:

        ffmpeg -i /sdcard/video/transcode/video.avi -map 0:0,0 -i /sdcard/video/transcode/first.mp3 -map 1:0,1 -i /sdcard/video/transcode/second.mp3 -map 2:0,2 -acodec copy -vcodec copy /sdcard/video/transcode/Output.avi

    to add two audio streams to one video file. But ffmpeg says the number of mappings should match the number of output streams. What is wrong here? I'm trying to work with an Android build of FFmpeg, "ffmpeg for android beta". "Does not work" means that this uncommunicative Android build of FFmpeg just stops without giving any error message. The -codec copy option does not work with this build. Now I tried the same set of files with the ffmpeg command-line tool that comes with Ubuntu 10.something (I can't say exactly where it is from). The -codec copy option does not work with this FFmpeg either. Here is the complete output:

        $ ffmpeg -i input.avi -i first.mp3 -i second.mp3 -map 0 -map 1 -map 2 -acodec copy -vcodec copy output.avi
        FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-version=4:0.5.9-0ubuntu0.10.04.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
          libavutil   49.15. 0 / 49.15. 0
          libavcodec  52.20. 1 / 52.20. 1
          libavformat 52.31. 0 / 52.31. 0
          libavdevice 52. 1. 0 / 52. 1. 0
          libavfilter  0. 4. 0 /  0. 4. 0
          libswscale   0. 7. 1 /  0. 7. 1
          libpostproc 51. 2. 0 / 51. 2. 0
          built on Jun 12 2012 16:27:34, gcc: 4.4.3
        [NULL @ 0x93cfd10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag
        Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1)
        Input #0, avi, from 'input.avi':
          Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc
        Input #1, mp3, from 'first.mp3':
          Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
            Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
        Input #2, mp3, from 'second.mp3':
          Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
            Stream #2.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
        Number of stream maps must match number of output streams

    Merging only one audio stream with the video stream works with both the Ubuntu and the Android version of FFmpeg. Here is the complete output:

        $ ffmpeg -i input.avi -i first.mp3 -map 0 -map 1 -acodec copy -vcodec copy output.avi
        FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          (same configuration and library versions as above; built on Jun 12 2012 16:27:34, gcc: 4.4.3)
        [NULL @ 0x9bfad10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag
        Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1)
        Input #0, avi, from 'input.avi':
          Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc
        Input #1, mp3, from 'first.mp3':
          Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
            Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 576x432, q=2-31, 90k tbn, 25 tbc
            Stream #0.1: Audio: libmp3lame, 22050 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #1.0 -> #0.1
        Press [q] to stop encoding
        frame= 6157 fps=6156 q=-1.0 size= 31667kB time=246.28 bitrate=1053.3kbits/s

    Do you have an idea why it does not work with two audio streams? By the way,

        ffmpeg -i input_with_first_audio_stream.avi -i second.mp3 -acodec copy -vcodec copy output_two_audio_streams.avi -newaudio

    works with both versions of ffmpeg that I use, but the first audio stream is played too fast (x10 or more), while the second audio stream is played correctly. Many thanks in advance, and sorry for my unconventional question and outdated versions of ffmpeg. I am not very experienced and it is not so easy for me to compile from source (especially the Android version). I will try to compile an up-to-date version of ffmpeg on Ubuntu, but I don't have much free time.


  • PCRE limits exceeded, but triggering rules are SQL related

    - by Wolfe
    Entries like the following are spamming my Apache error log:

        [Mon Oct 15 17:12:13 2012] [error] [client xx.xx.xx.xx] ModSecurity: Rule 1d4ad30 [id "300014"][file "/usr/local/apache/conf/modsec2.user.conf"][line "349"] - Execution error - PCRE limits exceeded (-8): (null). [hostname "domain.com"] [uri "/admin.php"] [unique_id "UHx8LEUQwYEAAGutKkUAAAEQ"]

    It's only the admin side, and only these two lines in the config. Line 349:

        # Generic SQL sigs
        SecRule ARGS "(or.+1[[:space:]]*=[[:space:]]1|(or 1=1|'.+)--')" "id:300014,rev:1,severity:2,msg:'Generic SQL injection protection'"

    And line 356:

        SecRule ARGS "(insert[[:space:]]+into.+values|select.*from.+[a-z|A-Z|0-9]|select.+from|bulk[[:space:]]+insert|union.+select|convert.+\(.*from)"

    Is there a way to fix this problem? Can someone explain what is going on, or whether these rules are even valid, given that they trigger this error? I know the PCRE limit is supposedly a recursion protection, but these rules protect against SQL injection, so I'm confused.


  • SSIS Send Mail Task and ForceExecutionValue Error

    - by Kevin Shyr
    I tried to use ForcedExecutionValue on several Send Mail Tasks and log the execution into an ExecValueVariable, so that at the end of the package I can write to a table recording whether the data check was successful or not (determined by whether an email was sent out). I set up a Boolean variable that is accessible at the package level, then set up my Send Mail Task as in the screenshot below, with Boolean as my ForcedExecutionValueType. When I run the package, I got the error described below. Just to make sure this is not another issue SSIS has with the Boolean type (you also can't set a variable of type Boolean from xp_cmdshell), I used variables of types String, Int32 and DateTime with the corresponding ForcedExecutionValueType. The only way to get around this error was to set my variable to type Object, but then when you try to get the value out later, the Object is null.

    I didn't spend enough time on this to see whether it's really a bug in SSIS or not, or whether this is just how the Send Mail Task works. I just want to log the error and will circle back on this later to narrow down the issue some more. In the meantime, please share if you have run into the same problem. The current workaround is to attach a script task at the end.

    Also, two existing limitations need to be noted:

        Data checks need to be done serially, because every check needs an inner join to a master table. The master table has all the data in a single XML column and hence needs to be retrieved with XQuery (a fundamental design flaw that needs to be changed).
        The next iteration will be to change this design into a FOR loop and pull the checking query from a table somewhere, with all the info needed for the email task, but that is being put to the back of the priority list.

    Error messages:

        Error: 0xC001F009 at CountCheckBetweenODSAndCleanSchema: The type of the value being assigned to variable "User::WasErrorEmailEverSent" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.
        Error: 0xC0019001 at Send Mail Task on count mismatch: The wrapper was unable to set the value of the variable specified in the ExecutionValueVariable property.

    Screenshot of my Send Mail Task setup: (not included in this copy)
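
    As a rough sketch of the script-task workaround mentioned above (the variable name matches the error message; everything else is an assumption, not taken from the original package), the script simply sets the package variable directly instead of relying on ForcedExecutionValue:

    ```csharp
    // This goes inside the ScriptMain class that SSIS generates for a Script Task,
    // with User::WasErrorEmailEverSent listed under the task's ReadWriteVariables.
    public void Main()
    {
        // Record directly in the package variable that this branch sent the error email.
        Dts.Variables["User::WasErrorEmailEverSent"].Value = true;

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    ```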


  • XNA CustomModelAnimationSample problem

    - by Mentoliptus
    I downloaded the official tutorial from CustomModelAnimationSample. It works fine, but when I try to replicate it in my project, it fails to load the Tag property of my model. I found that the problem is in this line:

        skinnedModel = Content.Load<Model>("DudeWalk");

    This line loads the model from the DudeWalk.fbx file with the custom SkinnedModelProcessor. It loads the animation data into the model. After this line, the Tag property is populated. I stepped into the method and it went to the custom ModelData class. I copied everything from the projects CustomModelAnimationWindows and CustomModelAnimationPipeline to my solution and set all the references. I tried the same line of code and couldn't step into the method. It called the default method or model constructor, and after the line the model's Tag property was null. I have to load the model through my custom SkinnedModelProcessor class, but how do I tell the game to use this class? In the tutorial CustomModelClass the line is changed to:

        model = Content.Load<CustomModel>("tank");

    So I assumed that I have to set the generic type to a custom model class, but the first example works without it. If anyone has some useful advice or some other helpful link, I'll be happy to try it.


  • What to Return with Async CRUD methods

    - by RualStorge
    While there is a similar question focused on Java, I've been in debates about utilizing Task objects. What's the best way to handle returns on CRUD methods (and similar)? Common returns we've seen over the years are:

        Void (no return unless there is an exception)
        Boolean (true on success, false on failure, exception on unhandled failure)
        Int or GUID (return the newly created object's Id; 0 or null on failure, exception on unhandled failure)
        The updated object (exception on failure)
        Result object (an object that houses the manipulated object's Id, a Boolean or status field indicating success or failure, exception information if there was one, etc.)

    The concern comes into play as we've started moving over to C# 5's async functionality, and this brought up the question of how we should handle CRUD returns at large scale. In our systems we have a little of everything in regards to what we return, and we want to standardize these returns... Now the question is: what is the recommended standard? Is there even a recommended standard yet? (I realize we need to decide on our own standard, but typically we do so by looking at best practices, seeing if they make sense for us, and going from there; here we're not finding much to work with.)
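
    For what it's worth, one common shape for the "result object" option under async is a small generic wrapper returned as a Task<Result<T>>. The sketch below is illustrative only; the Result<T> type and the CreateCustomerAsync method are assumptions, not an established standard.

    ```csharp
    using System;
    using System.Threading.Tasks;

    // A minimal result wrapper for CRUD-style operations.
    public sealed class Result<T>
    {
        public bool Succeeded { get; private set; }
        public T Value { get; private set; }       // e.g. the new entity's Id
        public string Error { get; private set; }  // populated on expected failures

        public static Result<T> Success(T value)
        {
            return new Result<T> { Succeeded = true, Value = value };
        }

        public static Result<T> Failure(string error)
        {
            return new Result<T> { Succeeded = false, Error = error };
        }
    }

    public class CustomerRepository
    {
        // Create returns the new Id on success; truly unexpected errors still throw.
        public async Task<Result<Guid>> CreateCustomerAsync(string name)
        {
            if (string.IsNullOrWhiteSpace(name))
                return Result<Guid>.Failure("Name is required.");

            var id = Guid.NewGuid();
            await Task.Delay(10);   // stand-in for the actual data-access call
            return Result<Guid>.Success(id);
        }
    }
    ```

    Callers await the task and branch on Succeeded for expected failures, which keeps those out of exception handling while genuinely unexpected errors still surface as faulted tasks.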


  • Allocating memory inside a function and returning it back

    - by user2651062
    I want to pass a pointer to my function and allocate the memory to which this pointer points. I've read in other posts that I should pass a double pointer to this function, and I did so, but I keep getting a segmentation fault:

        #include <iostream>
        #include <stdlib.h>

        using namespace std;

        void allocate(unsigned char** t)
        {
            *t = (unsigned char*)malloc(3 * sizeof(unsigned char));
            if (*t == NULL)
                cout << "Allocation failed" << endl;
            else
                for (int m = 0; m < 3; m++)
                    *(t[m]) = 0;
        }

        int main()
        {
            unsigned char* t;
            allocate(&t);
            cout << t[0] << " " << t[1] << endl;
            return 0;
        }

    The result is always this:

        Segmentation fault (core dumped)

    I don't think that there's anything missing from this code. What could be wrong?


  • google sitemap generator installation selinux

    - by adnan
    When I try to install Google Sitemap Generator, I receive this error:

        Change security context of  to system_u:object_r:httpd_modules_t
        install: WARNING: ignoring --context (-Z); this kernel is not SELinux-enabled
        Program files successfully copied.
        ./install.sh: line 488: 14284 Segmentation fault "$DEST_DIR/$BIN_DIR/$DAEMON_BIN" update_setting $update_setting_flags "apache_conf=$APACHE_CONF" "apache_group=$APACHE_GROUP" > /dev/null

    after choosing the file-submission settings. I tried to uninstall it, execute getenforce, and try again, but I get the same problem. When I look in /etc/sysconfig/, it does not contain the selinux file. My OS is CentOS 6 x86_64.


  • What's so bad about pointers in C++?

    - by Martin Beckett
    To continue the discussion in "Why are pointers not recommended when coding with C++": suppose you have a class that encapsulates objects which need some initialisation to be valid - like a network socket.

        // Blah manages some data and transmits it over a socket
        class TcpSocket;   // forward declaration, so nice weak linkage

        class blah {
            ... stuff
            TcpSocket *socket;
        };

        ~blah {
            // TcpSocket dtor handles disconnect
            delete socket;  // or better, wrap it in a smart pointer
        }

    The ctor ensures that socket is marked NULL; then later in the code, when I have the information to initialise the object:

        // initialising blah
        if ( !socket ) {
            // I know socket hasn't been created/connected
            // create it in a known initialised state and handle any errors
            // RAII is a good thing !
            socket = new TcpSocket(ip, port);
        }

        // and when I actually need to use it
        if (socket) {
            // if socket exists then it must be connected and valid
        }

    This seems better than having the socket on the stack, having it created in some 'pending' state at program start and then having to continually check some isOK() or isConnected() function before every use. Additionally, if the TcpSocket ctor throws an exception it's a lot easier to handle at the point a TCP connection is made rather than at program start. Obviously the socket is just an example, but I'm having a hard time thinking of when an encapsulated object with any sort of internal state shouldn't be created and initialised with new.


  • Monitoring settings in a configsection of your app.config for changes

    - by dotjosh
    The usage:

        public static void Main()
        {
            using (var configSectionAdapter = new ConfigurationSectionAdapter<ACISSInstanceConfigSection>("MyConfigSectionName"))
            {
                configSectionAdapter.ConfigSectionChanged += () =>
                {
                    Console.WriteLine("File has changed! New setting is " + configSectionAdapter.ConfigSection.MyConfigSetting);
                };

                Console.WriteLine("The initial setting is " + configSectionAdapter.ConfigSection.MyConfigSetting);
                Console.ReadLine();
            }
        }

    The meat:

        public class ConfigurationSectionAdapter<T> : IDisposable where T : ConfigurationSection
        {
            private readonly string _configSectionName;
            private FileSystemWatcher _fileWatcher;

            public ConfigurationSectionAdapter(string configSectionName)
            {
                _configSectionName = configSectionName;
                StartFileWatcher();
            }

            private void StartFileWatcher()
            {
                var configurationFileDirectory = new FileInfo(Configuration.FilePath).Directory;
                _fileWatcher = new FileSystemWatcher(configurationFileDirectory.FullName);
                _fileWatcher.Changed += FileWatcherOnChanged;
                _fileWatcher.EnableRaisingEvents = true;
            }

            private void FileWatcherOnChanged(object sender, FileSystemEventArgs args)
            {
                var changedFileIsConfigurationFile = string.Equals(args.FullPath, Configuration.FilePath, StringComparison.OrdinalIgnoreCase);
                if (!changedFileIsConfigurationFile)
                    return;

                ClearCache();
                OnConfigSectionChanged();
            }

            private void ClearCache()
            {
                ConfigurationManager.RefreshSection(_configSectionName);
            }

            public T ConfigSection
            {
                get { return (T)Configuration.GetSection(_configSectionName); }
            }

            private System.Configuration.Configuration Configuration
            {
                get { return ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); }
            }

            public delegate void ConfigChangedHandler();
            public event ConfigChangedHandler ConfigSectionChanged;

            protected void OnConfigSectionChanged()
            {
                if (ConfigSectionChanged != null)
                    ConfigSectionChanged();
            }

            public void Dispose()
            {
                _fileWatcher.Changed -= FileWatcherOnChanged;
                _fileWatcher.EnableRaisingEvents = false;
                _fileWatcher.Dispose();
            }
        }


  • Particle and Physics problem.

    - by Quincy
    This was originally a forum post so I hope you guys don't mind it being 2 questions in one. I am making a game and I got some basic physics implemented. I have 2 problems, 1 with particles being drawn in the wrong place and one with going through walls while jumping in corners. Skip over to about 15 sec video showing the 2 problems : http://youtube.com/watch?v=Tm9nfWsWfiM So the problem with the particles seems to be coming from the removal, as soon as I remove that piece of code it instantly works, but there shouldn't be a problem since they shouldn't even draw when their energy gets to 0 (and then they get removed) So my first question is, how are these particles getting warped all over the screen ? Relevant code : Particle class : class Particle { //Physics public Vector2 position = new Vector2(0,0); public float direction = 180; public float speed = 100; public float energy = 1; protected float startEnergy = 1; //Visual public Sprite sprite; public float rotation = 0; public float scale = 1; public byte alpha = 255; public BlendMode blendMode { get { return sprite.BlendMode; } set { sprite.BlendMode = value; } } public Particle() { } public virtual void Think(float frameTime) { if (energy - frameTime < 0) energy = 0; else energy -= frameTime; position += new Vector2((float)Math.Cos(MathHelper.DegToRad(direction)), (float)Math.Sin(MathHelper.DegToRad(direction))) * speed * frameTime; alpha = (byte)(255 * energy / startEnergy); sprite.Rotation = rotation; sprite.Position = position; sprite.Color = new Color(sprite.Color.R, sprite.Color.G, sprite.Color.B, alpha); } public virtual void Draw(float frameTime) { if (energy > 0) { World.camera.DrawSprite(sprite); } } // Basic particle implementation class BasicSprite : Particle { public BasicSprite(Sprite _sprite) { sprite = _sprite; } } Emitter : class Emitter { protected static Random rand = new Random(); protected List<Particle> particles = new List<Particle>(); public BaseEntity target = null; public Vector2 position = new Vector2(0, 0); public bool Active = true; public float timeAlive = 0; public int particleCount = 0; public int ParticlesPerSeccond { get { return (int)(1 / particleSpawnTime); } set { particleSpawnTime = 1 / (float)value; } } public float dieTime = float.MaxValue; float particleSpawnTime = 0.05f; float spawnTime = 0; public Emitter() { } public virtual void Think(float frametime) { spawnTime += frametime; if (dieTime != float.MaxValue) { timeAlive += frametime; if (timeAlive >= dieTime) Active = false; } if (Active) { if (target != null) position = target.Position; while (spawnTime > particleSpawnTime) { spawnTime -= particleSpawnTime; AddParticle(); particleCount++; } } for (int i = 0; i < particles.Count; i++) { particles[i].Think(frametime); if (particles[i].energy <= 0) { particles.Remove(particles[i]); // As soon as this is removed, it works particleCount--; } } } public virtual void AddParticle() { } public virtual void Draw(float frametime) { foreach (Particle particle in particles) { particle.Draw(frametime); } } } class BloodEmitter : Emitter { Image image; public BloodEmitter() { image = new Image(@"Content/Particles/TinyCircle.png"); image.CreateMaskFromColor(new Color(255, 0, 255, 255)); this.dieTime = 0.5f; this.ParticlesPerSeccond = 100; } public override void AddParticle() { Sprite sprite = new Sprite(image); sprite.Color = new Color((byte)(rand.NextDouble() * 255), (byte)(rand.NextDouble() * 255), (byte)(rand.NextDouble() * 255)); BasicSprite particle = new BasicSprite(sprite); particle.direction = 
(float)rand.NextDouble() * 360; particle.position = position; particle.blendMode = BlendMode.Alpha; particles.Add(particle); } } The seccond problem is the physics problem, for some reason I can get through the right bottom corner while jumping. I think this is coming from me switching animations but I thought I made it compensate for that. Relevant code : PhysicsEntity : class PhysicsEntity : BaseEntity { // Horizontal movement constants protected const float maxHorizontalSpeed = 1000; protected const float horizontalAcceleration = 15; protected const float horizontalDragAir = 0.95f; protected const float horizontalDragGround = 0.95f; // Vertical movement constants protected const float maxVerticalSpeed = 1000; protected const float verticalAcceleration = 20; // Everything needed for movement and correct animations protected float movement = 0; protected bool onGround = false; protected Vector2 Velocity = new Vector2(0, 0); protected float maxSpeed = 0; float lastThink = 0; float thinkTime = 1f/60f; public PhysicsEntity(Vector2 position, Sprite sprite) : base(position, sprite) { } public override void Draw(float frameTime) { base.Draw(frameTime); } public override void Think(float frameTime) { CalculateMovement(frameTime); base.Think(frameTime); } protected void CalculateMovement(float frameTime) { lastThink += frameTime; while (lastThink > thinkTime) { onGround = false; Velocity.X = MathHelper.Clamp(Velocity.X + horizontalAcceleration * movement, -maxHorizontalSpeed, maxHorizontalSpeed); if (onGround) Velocity.X *= horizontalDragGround; else Velocity.X *= horizontalDragAir; if (maxSpeed < Velocity.X) maxSpeed = Velocity.X; Velocity.Y = MathHelper.Clamp(Velocity.Y + verticalAcceleration, -maxVerticalSpeed, maxVerticalSpeed); lastThink -= thinkTime; DoCollisions(thinkTime); DoAnimations(thinkTime); } } public virtual void DoAnimations(float frameTime) { } public void DoCollisions(float frameTime) { Position.Y += Velocity.Y * frameTime; Vector2 tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.Y += collisionDepth.Y; if (collisionDepth.Y < 0) onGround = true; Velocity.Y = 0; } Position.X += Velocity.X * frameTime; tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.X += collisionDepth.X; Velocity.X = 0; } } public void DoCollisions(Vector2 difference) { CollisionRectangle.Y = Position.Y - difference.Y; CollisionRectangle.Height += difference.Y; Vector2 tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.Y += collisionDepth.Y; if (collisionDepth.Y < 0) onGround = true; Velocity.Y = 0; } CollisionRectangle.X = Position.X - difference.X; CollisionRectangle.Width += difference.X; tileCollision = GetTileCollision(); if (tileCollision.X != -1 || tileCollision.Y != -1) { 
Vector2 collisionDepth = CollisionRectangle.DepthIntersection( new Rectangle( tileCollision.X * World.tileEngine.TileWidth, tileCollision.Y * World.tileEngine.TileHeight, World.tileEngine.TileWidth, World.tileEngine.TileHeight ) ); Position.X += collisionDepth.X; Velocity.X = 0; } } Vector2 GetTileCollision() { int topLeftTileX = (int)(CollisionRectangle.TopLeft.X / World.tileEngine.TileWidth); int topLeftTileY = (int)(CollisionRectangle.TopLeft.Y / World.tileEngine.TileHeight); int BottomRightTileX = (int)(CollisionRectangle.DownRight.X / World.tileEngine.TileWidth); int BottomRightTileY = (int)(CollisionRectangle.DownRight.Y / World.tileEngine.TileHeight); if (CollisionRectangle.DownRight.Y % World.tileEngine.TileHeight == 0) // If your exactly against the tile don't count that as being inside the tile BottomRightTileY -= 1; if (CollisionRectangle.DownRight.X % World.tileEngine.TileWidth == 0) // If your exactly against the tile don't count that as being inside the tile BottomRightTileX -= 1; for (int i = topLeftTileX; i <= BottomRightTileX; i++) { for (int j = topLeftTileY; j <= BottomRightTileY; j++) { if (World.tileEngine.TileIsSolid(i, j)) { return new Vector2(i, j); } } } return new Vector2(-1, -1); } } Player : enum State { Standing, Running, Jumping, Falling, Sliding, WallSlide } class Player : PhysicsEntity { private State state { get { return currentState; } set { if (currentState != value) { currentState = value; animationChanged = true; } } } private State currentState = State.Standing; private BasicEmitter basicEmitter = new BasicEmitter(); public bool flipped; public bool animationChanged = false; protected const float jumpPower = 600; AnimationManager animationManager; Rectangle DrawRectangle; public override Rectangle CollisionRectangle { get { return new Rectangle( Position.X - DrawRectangle.Width / 2f, Position.Y - DrawRectangle.Height / 2f, DrawRectangle.Width, DrawRectangle.Height ); } } public Player(Vector2 position, Sprite sprite) : base(position, sprite) { // Only posted the relevant bit DrawRectangle = animationManager.currentAnimation.drawingRectangle; } public override void Draw(float frameTime) { World.camera.DrawSprite( Sprite, Position + new Vector2(DrawRectangle.X, DrawRectangle.Y), animationManager.currentAnimation.drawingRectangle ); } public override void Think(float frameTime) { //I only posted the relevant stuff if (animationChanged) { // if the animation has changed make sure we compensate for the change in with and height animationChanged = false; DoCollisions(animationManager.getSizeDifference()); } DoCustomMovement(); base.Think(frameTime); if (!onGround && Velocity.Y > 0) { state = State.Falling; } } void DoCustomMovement() { if (onGround) { if (World.renderWindow.Input.IsKeyDown(KeyCode.W)) { Velocity.Y = -jumpPower; state = State.Jumping; } } } public override void DoAnimations(float frameTime) { string stateName = Enum.GetName(typeof(State), state); if (!animationManager.currentAnimationIs(stateName)) { animationManager.PlayAnimation(stateName); } animationManager.Think(frameTime); DrawRectangle = animationManager.currentAnimation.drawingRectangle; Sprite.Center = new Vector2( DrawRectangle.X + DrawRectangle.Width / 2, DrawRectangle.Y + DrawRectangle.Height / 2 ); Sprite.FlipX(flipped); } So why am I warping through walls ? I have given this some thought but I just can't seem to find out why this is happening. Full source if needed : source : http://www.mediafire.com/?rc7ddo09gnr68zd (download link)
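
    One thing worth noting about the emitter's update loop, independent of whether it explains the warping: calling List.Remove inside a forward for loop shifts the remaining elements down, so the particle that moves into slot i is skipped for that frame. Below is a minimal sketch of a safer removal pass, written against a simplified Particle with only the energy field assumed here, not the full class from the question.

    ```csharp
    using System.Collections.Generic;

    public class Particle
    {
        public float energy;   // simplified stand-in for the real Particle class
        public void Think(float frameTime) { energy -= frameTime; }
    }

    public class Emitter
    {
        private readonly List<Particle> particles = new List<Particle>();

        public void Think(float frameTime)
        {
            // Update every particle first.
            foreach (var p in particles)
                p.Think(frameTime);

            // Then remove the dead ones in a single pass; RemoveAll avoids the
            // index-shifting problem of removing inside a forward for loop.
            particles.RemoveAll(p => p.energy <= 0);
        }
    }
    ```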


  • Getting the relational table data into XML recursively

    - by Tom
    I have levels of tables (Level1, Level2, Level3, ...). For simplicity, we'll say I have 3 levels. The rows in the higher-level tables are parents of lower-level table rows. The relationship does not skip levels, however; e.g. Row1Level1 is the parent of Row3Level2, and Row2Level2 is the parent of Row4Level3. A Level(n) row's parent is always in Level(n-1). Given these tables with data, I need to come up with a recursive function that generates an XML file to represent the relationship and the data. E.g.:

        <data>
          <level levelid="1" rowid="1">
            <level levelid="2" rowid="3" />
          </level>
          <level levelid="2" rowid="2">
            <level levelid="3" rowid="4" />
          </level>
        </data>

    I would like help coming up with pseudo-code for this setup. This is what I have so far:

        XElement GetXMLData(Table table, string identifier, XElement data)
        {
            XElement xmlData = data;
            if (table != null)
            {
                foreach (row in the table)
                {
                    // Get subordinate table
                    Table subordinateTable = GetSubordinateTable(table);
                    // Get the XML Data for the children of current row
                    xmlData += GetXMLData(subordinateTable, row.Identifier, xmlData);
                }
            }
            return xmlData;
        }
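
    A rough sketch of how this recursion could look with LINQ to XML, building child elements rather than concatenating them. The Row type, ParentRowId field and rowsByLevel dictionary are assumptions standing in for the real tables and data access, not part of the original question.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    // Hypothetical in-memory stand-in for a row in one of the level tables.
    public sealed class Row
    {
        public int LevelId { get; set; }
        public int RowId { get; set; }
        public int? ParentRowId { get; set; }   // null for Level1 rows
    }

    public static class HierarchyXml
    {
        // rowsByLevel[1] holds Level1 rows, rowsByLevel[2] holds Level2 rows, etc.
        public static XElement Build(IDictionary<int, List<Row>> rowsByLevel)
        {
            var root = new XElement("data");
            foreach (var top in rowsByLevel[1])
                root.Add(BuildElement(top, rowsByLevel));
            return root;
        }

        private static XElement BuildElement(Row row, IDictionary<int, List<Row>> rowsByLevel)
        {
            var element = new XElement("level",
                new XAttribute("levelid", row.LevelId),
                new XAttribute("rowid", row.RowId));

            // Recurse into the next level, if any, picking only this row's children.
            int nextLevel = row.LevelId + 1;
            if (rowsByLevel.ContainsKey(nextLevel))
            {
                foreach (var child in rowsByLevel[nextLevel].Where(r => r.ParentRowId == row.RowId))
                    element.Add(BuildElement(child, rowsByLevel));
            }
            return element;
        }
    }
    ```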


  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test / interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester; how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0')
            {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward - it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code works, and if you're not familiar with the operator precedence involved then it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it shows a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but is a general question about code readability vs. compactness when implementing an algorithm, specifically in tests / interviews.


  • Cannot see boot options after editing grub background

    - by cipricus
    After solving this problem I managed to get myself into truble again out of nothing by trying to change the display of the dual boot option page in Boot Customizer. I have changed the background, the fonts size (I have increased them) and font style (I have chosen UnDotum). But Boot Customizer gave me an error (I mean a message that the application was closed unexpectedly or smth). I have restarted BootCustomizer and the settings were there. When I rebooted, instead of the normal boot options list, just the background image that I had selected and nothing else. I used Boot Repair to repair grub, it says it did it successfully, but I still get the background image when I try to boot. Any ideas? (Could it be the matter that I chose UnDotum font style? That was installed in Lubuntu - but how could it be accessible in displaying boot options?) The contents of etc/default/grub are: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" I have tried to modify etc/default/grub: GRUB_HIDDEN_TIMEOUT=0 to 10 GRUB_HIDDEN_TIMEOUT_QUIET=true to false and GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to "" but it doesn't help Also, using Shift doesn't make the list visible. I am looking for something like a command that would reset grub options to default. [When trying to reinstall grub i get to this window in term:


  • (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTextImage2D)

    - by OstlerDev
    I am trying to create a render target for my game so that I can re-render at a different screen size. But I am receiving the following error: Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Pixel Unpack Buffer Object is disabled Here is the source code for my Render method: // clear screen GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // Start FBO Rendering Code // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer. int FramebufferName = GL30.glGenFramebuffers(); GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName); // The texture we're going to render to int renderedTexture = glGenTextures(); // "Bind" the newly created texture : all future texture functions will modify this texture glBindTexture(GL_TEXTURE_2D, renderedTexture); // Give an empty image to OpenGL ( the last "0" ) glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); // Poor filtering. Needed ! glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Set "renderedTexture" as our colour attachement #0 GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0); // Set the list of draw buffers. IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20); GL20.glDrawBuffers(drawBuffer); // Always check that our framebuffer is ok if(GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE){ System.out.println("Framebuffer was not created successfully! Exiting!"); return; } // Resets the current viewport GL11.glViewport(0, 0, scaleWidth*scale, scaleHeight*scale); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); // let subsystem paint if (callback != null) { callback.frameRendering(); } // update window contents Display.update(); It is crashing on this line: glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.


  • Nvidia dual monitor configuration gets lost every time I reboot

    - by sunwukung
    I've recently updated (well, borked then completely reinstalled) to 12.04. I'm running a dual monitor setup, with a Dell U2410 / Dell 2007WFP combination on an HP Elite Book 8560W. The graphics card is an NVIDIA GF108 [Quadro 1000M]. My problem is as follows. I can get the dual monitor setup working fine, but every time I reboot, my machine appears to lose the settings (specifically, the U2410 is disabled, the mouse pointer is locked in the launcher). I have to restate the settings after every launch. I've tried running nvidia-settings as sudo, I've save the changes to my xorg.conf file (see below) but nothing seems to be sticking. Has anyone had similair issues, or know of a fix? Conf file follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 2007WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro 1000M" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "1" Option "TwinViewXineramaInfoOrder" "DFP-1" Option "metamodes" "CRT: 1680x1050 +1920+0, DFP-1: 1920x1200 +0+0; CRT: nvidia-auto-select +0+0, DFP-1: NULL" SubSection "Display" Depth 24 EndSubSection EndSection The error message I'm getting is this: none of the selected modes were compatible with the possible modes: Trying modes for CRTC 642: CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 0) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 0) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 0) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 1) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 1) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 1)


  • Save password in WCF adapter binding file

    - by Edmund Zhao
    The binding file for the WCF Adapter doesn't save the password, no matter whether it is generated by the "Add Generated Items..." wizard in Visual Studio or by "Export Bindings..." in the administration console. This is by design, for security reasons, but it is very annoying, especially when you import bindings which contain multiple WCF send ports. The way to avoid retyping the password every time after an import is to edit the binding file before the import. Here is what needs to be done.

    1. Find the following string:

        &lt;Password vt="1" /&gt;

    "&lt;" means "<", "&gt;" means ">", and "vt" means "Variable Type"; variable type 1 is "NULL", so the above string can be translated to "<Password/>".

    2. Replace it with:

        &lt;Password vt="8"&gt;MyPassword&lt;/Password&gt;

    Variable type 8 is "string", so the above string can be translated to "<Password>MyPassword</Password>".

    Binding files use a lot of character entity references for XML character encoding purposes. For a list of the special character entity references, you can check here. ...Edmund Zhao


  • Finding the file that is on a bad block on a HFS+ volume (debugfs for HFS+)

    - by Blair Zajac
    I have a drive in our iMac that has bad blocks, as booting from an Ubuntu 11.10 live CD and using ddrescue -f /dev/sda /dev/null finds them. I'd like to get the drive to remap them by writing to the blocks, say using hdparm --write-sector, but I don't want to do this without knowing what's in those blocks and finding the file that owns them, so I can restore the file from another source. I found fileXray but don't feel like spending $79 to map a block to a file and hfsdebug has been taken offline. Are there suggestions on a tool or technique to use? I looked at all the Ubuntu HFS+ packages to see if they could provide this info but nothing jumped out at me. BTW, I used Disk Utility to erase the empty space, but it didn't get any of the bad blocks to be remapped, according to smartctl -A.


  • Emacs doesn't load gui.

    - by D Connors
    Hi, whenever I run emacs or emacs23 in a terminal, I just get the following output:

        ** (emacs:2620): CRITICAL **: menu_proxy_module_load: assertion `dbusproxy != NULL' failed

    The GUI doesn't load, and the emacs window never opens. The emacs process doesn't actually crash (the terminal stays busy, and I can see the emacs23 process running with ps -e). I've tried running it with the -D --debug-init arguments, but the same thing happens and the output is exactly the same. However, if I run emacs -nw it successfully runs emacs in terminal mode as if nothing were wrong. Strangely, this problem only started happening the second time I ran emacs today. The first time it worked perfectly fine. Since then, I've tried rebooting and I've tried purging the emacs installation, to no success. I haven't installed any new packages today, but I might have upgraded some; could that be the reason? Is there a way to find out which packages were installed or upgraded today? Thanks. I'm running Ubuntu Lucid.

