Search Results

Search found 9963 results on 399 pages for 'special commands'.

Page 100 of 399

  • Problem when reading input in C

    - by gcx
    I've made a linked list. Its elements keep both the previous and the next item's address. It reads commands from an input file, detects the command, and uses the following word as a parameter (text: add_to_front john - means: add_to_front(john)). Code: http://pastebin.com/KcAm1y3L

    When I try to feed the commands from an input file it gives me the same output over and over. However, if I write the calls in main() manually, it works. For example, for the input file

        add_to_front john
        add_to_back jane
        add_to_back jane
        print

    the output is (unfortunately):

        >add_to_front john
        >add_to_back jane
        >add_to_back jane
        >print jane jane jane

    However, if I write add_to_front(john); add_to_back(jane); add_to_back(jane); print(); in main() instead of this command check:

        while (scanf("%s",command)!=EOF) {
            if (strcmp(command,"add_to_front")==0) {
                gets(parameter);
                add_to_front(parameter);
            }
            else if (strcmp(command,"add_to_back")==0) {
                gets(parameter);
                add_to_back(parameter);
            }
            else if (strcmp(command,"remove_from_back")==0)
                remove_from_back(parameter);
            ...
            printf(" HUH?\n");
        }
        }

    it gives the correct output. I know it's a lot to ask, but this has been bothering me for two days. What do you think I'm doing wrong?
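
    One possible explanation for the repeated "jane" output, offered only as a guess since the real list code lives in the pastebin: if every node stores the shared parameter buffer itself instead of a copy of it, each node ends up printing whatever was read last. The sketch below reads command/parameter pairs with scanf alone (no gets at all) and copies each string with strdup before storing it; the list functions are minimal stand-ins rather than the asker's own.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Minimal doubly linked list used only to illustrate the copying. */
        typedef struct node {
            char *name;
            struct node *prev, *next;
        } node;

        static node *head = NULL, *tail = NULL;

        static node *make_node(const char *s)
        {
            node *n = malloc(sizeof *n);
            n->name = strdup(s);   /* copy the string; do not keep the shared buffer */
            n->prev = n->next = NULL;
            return n;
        }

        static void add_to_front(const char *s)
        {
            node *n = make_node(s);
            n->next = head;
            if (head) head->prev = n; else tail = n;
            head = n;
        }

        static void add_to_back(const char *s)
        {
            node *n = make_node(s);
            n->prev = tail;
            if (tail) tail->next = n; else head = n;
            tail = n;
        }

        static void print(void)
        {
            for (node *n = head; n; n = n->next)
                printf("%s\n", n->name);
        }

        int main(void)
        {
            char command[64], parameter[64];

            /* scanf("%s") skips newlines on its own, so gets() is unnecessary. */
            while (scanf("%63s", command) == 1) {
                if (strcmp(command, "add_to_front") == 0 && scanf("%63s", parameter) == 1)
                    add_to_front(parameter);
                else if (strcmp(command, "add_to_back") == 0 && scanf("%63s", parameter) == 1)
                    add_to_back(parameter);
                else if (strcmp(command, "print") == 0)
                    print();
                else
                    printf(" HUH?\n");
            }
            return 0;
        }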

    Read the article

  • Need help with WCF design

    - by Jason
    I have been tasked with creating a set of web services. We are a Microsoft shop, so I will be using WCF for this project. There is an interesting design consideration that I haven't been able to figure out a solution for yet. I'll try to explain it with an example: my WCF service exposes a method named Foo(). 10 different users call Foo() at roughly the same time. I have 5 special resources called R1, R2, R3, R4, and R5. We don't really need to know what the resource is, other than the fact that a particular resource can only be in use by one caller at a time. Foo() is responsible for performing an action using one of these special resources. So, in a round-robin fashion, Foo() needs to find a resource that is not in use. If no resources are available, it must wait for one to be freed up.

    At first, this seems like an easy task. I could maybe create a singleton that keeps track of which resources are currently in use. The big problem is the fact that I need this solution to be viable in a web farm scenario. I'm sure there is a good solution to this problem, but I've just never run across this scenario before. I need some sort of resource tracker / provider that can be shared between multiple WCF hosts. Any ideas from the architects out there would be greatly appreciated!
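
    One way the shared tracker could look, sketched under the assumption that every host in the farm can reach a common SQL Server database with a table Resources(Name, InUse) seeded with rows R1-R5; the single atomic UPDATE is what stops two hosts from claiming the same resource:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        // Sketch only: table name, columns and the polling interval are assumptions,
        // and the SqlConnection passed in is assumed to be open already.
        public static class ResourcePool
        {
            // Claim one free resource; poll until one becomes available.
            public static string Acquire(SqlConnection conn)
            {
                const string claim = @"
                    UPDATE TOP (1) Resources
                    SET InUse = 1
                    OUTPUT inserted.Name
                    WHERE InUse = 0;";

                while (true)
                {
                    using (var cmd = new SqlCommand(claim, conn))
                    {
                        // The UPDATE returns the claimed row's name, or nothing.
                        var name = cmd.ExecuteScalar() as string;
                        if (name != null)
                            return name;
                    }
                    Thread.Sleep(200);   // everything busy - wait and retry
                }
            }

            // Hand the resource back so other callers of Foo() can use it.
            public static void Release(SqlConnection conn, string name)
            {
                using (var cmd = new SqlCommand(
                    "UPDATE Resources SET InUse = 0 WHERE Name = @name", conn))
                {
                    cmd.Parameters.AddWithValue("@name", name);
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Foo() would then wrap its work in Acquire/Release (ideally in a try/finally, plus some timestamp column so claims left behind by a crashed host can be expired).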

    Read the article

  • Best way to split a string by word (SQL Batch separator)

    - by Paul Kohler
    I have a class I use to "split" a string of SQL commands by a batch separator - e.g. "GO" - into a list of SQL commands that are run in turn, etc.

        ...
        private static IEnumerable<string> SplitByBatchIndecator(string script, string batchIndicator)
        {
            string pattern = string.Concat("^\\s*", batchIndicator, "\\s*$");
            RegexOptions options = RegexOptions.Compiled | RegexOptions.IgnoreCase | RegexOptions.Multiline;
            foreach (string batch in Regex.Split(script, pattern, options))
            {
                yield return batch.Trim();
            }
        }

    My current implementation uses a Regex with yield, but I am not sure if it's the "best" way:

    - It should be quick.
    - It should handle large strings (I have some scripts that are 10 MB in size, for example).
    - The hardest part (which the above code currently does not do) is to take quoted text into account.

    Currently the following SQL will incorrectly get split:

        var batch = QueryBatch.Parse(@"-- issue...
        insert into table (name, desc) values('foo', 'if the
        go
        is on a line by itself we have a problem...')");
        Assert.That(batch.Queries.Count, Is.EqualTo(1), "This fails for now...");

    I have thought about a token-based parser that tracks the state of open/closed quotes, but am not sure if Regex will do it. Any ideas!?
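
    A rough sketch of the token-based direction (not a drop-in for the class above): scan line by line, track whether the scanner is currently inside a single-quoted literal, and only honour the separator outside quotes. It deliberately ignores -- and /* */ comments, which a real parser would need to handle the same way.

        using System;
        using System.Collections.Generic;
        using System.Text;

        static class BatchSplitter
        {
            public static IEnumerable<string> Split(string script, string separator)
            {
                var current = new StringBuilder();
                bool inQuote = false;

                foreach (string rawLine in script.Split('\n'))
                {
                    string line = rawLine.TrimEnd('\r');

                    // A separator line only counts when we are outside a string literal.
                    if (!inQuote &&
                        line.Trim().Equals(separator, StringComparison.OrdinalIgnoreCase))
                    {
                        yield return current.ToString().Trim();
                        current.Length = 0;
                        continue;
                    }

                    // Update the quote state; a doubled '' inside a literal toggles
                    // twice and therefore cancels itself out.
                    foreach (char c in line)
                        if (c == '\'') inQuote = !inQuote;

                    current.AppendLine(line);
                }

                if (current.Length > 0)
                    yield return current.ToString().Trim();
            }
        }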

    Read the article

  • MVC: model of type Nullable<T>

    - by Fyodor Soikin
    I have a partial view that inherits from ViewUserControl<Guid?> - i.e. its model is of type Nullable<Guid>. Very simple view, nothing special, but that's not the point. Somewhere else, I do Html.RenderPartial( "MyView", someGuid ), where someGuid is of type Nullable<Guid>. Everything's perfectly legal, should work OK, right?

    But here's the gotcha: the second argument of Html.RenderPartial is of type object, and therefore, Nullable<Guid> being a value type, it must be boxed. But nullable types are somewhat special in the CLR, so that when you box one of them, you actually get either a boxed value of type T (Nullable's argument), or a null if the nullable didn't have a value to begin with. And that last case is actually interesting. It turns out that sometimes I do have a situation where someGuid.HasValue == false, and in those cases I effectively get a call Html.RenderPartial( "MyView", null ). And what does the HtmlHelper do when the model is null? Believe it or not, it just goes ahead and takes the parent view's model, regardless of its type. So, naturally, in those cases I get an exception saying: "The model item passed into the dictionary is of type 'Parent.View.Model.Type', but this dictionary requires a model item of type 'System.Guid?'"

    So the question is: how do I make MVC correctly pass new Nullable<Guid> { HasValue = false } instead of trying to grab the parent's model?

    Note: I did consider wrapping my Guid? in an object of another type, specifically created for this occasion, but this seems completely ridiculous. I don't want to do that as long as there's another way.

    Note 2: now that I've written all this, I've realized that the question may be reduced to: how do I pass a null for the model without ending up with the parent's model?
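
    One workaround that is often suggested for this situation (a sketch, not verified against the asker's exact MVC version): hand RenderPartial an explicit ViewDataDictionary whose Model is the nullable, so a null value no longer triggers the fall-back to the parent view's model.

        <%-- in the parent view --%>
        <% Html.RenderPartial("MyView",
               new ViewDataDictionary(ViewData) { Model = someGuid }); %>

    Copying ViewData keeps the parent's dictionary entries, while the explicit Model assignment (even when it boxes to null) is what keeps the parent's model from leaking into the partial.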

    Read the article

  • Card emulation via software NFC

    - by user85030
    After reading a lot of questions, I decided to post this one. I read that the stock version of Android does not support APIs for card emulation. Also, we cannot write custom applications to the secure element embedded in NFC controllers, due to keys managed by Google/Samsung. I need to emulate a card (MIFARE or DESFire etc.). The option I can see is doing it via software. I have an ACR122U reader and I've tested that NFC P2P mode works fine with the Nexus S that I have.

    1) I came across a site that said that the Nexus S's NFC controller (PN532) can emulate a MIFARE 4K card. If this is true, can I write/read APDU commands to this emulated card? (Probably if I use a modded ROM like CyanogenMod.)

    2) Can I write an Android application that reads APDU commands sent from the reader and generates appropriate responses (if not fully, then up to some extent only)? To do so, I read that we need to patch the Nexus S with CyanogenMod. Has someone tried emulating a card via this method? I see that this should be possible, since there are products from access control companies offering mobile applications with which one can open doors, e.g. http://www.assaabloy.com/en/com/Products/seos-mobile-access/

    Read the article

  • Launch .jar files with command line arguments (but with no console window)

    - by Virat Kadaru
    I have to do a demo of an application; the application has a server.jar and a client.jar. Both take command line arguments and are executable. I need to launch two instances of server.jar and two instances of client.jar. I thought that using a batch file was the way to go, but the batch file executes the first command (i.e. server.bat [argument1] [argument2]) and does not do anything else unless I close the first instance, in which case it then runs the 2nd command. Also, I do not want a blank console window to open (or be minimized). What I really need is a batch script that will just launch these apps without any console windows and launch all the instances that I need. Thanks in advance!

    EDIT: javaw works if I type the command into the console window individually. If I put the same in the batch file, it behaves as before: a console window opens, one instance starts (whichever was first) and it does not proceed further unless I close the application, in which case it runs the 2nd command. I want it to run all commands silently.

    SOLUTION: Found the solution; below are the contents of my batch file:

        @echo off
        start /B server.jar [arg1] [arg2]
        start /B server.jar [arg3] [arg4]
        @echo on

    This opens, runs all the commands and closes the window, and does not wait for the commands to finish.
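
    A variant worth noting (a sketch; jar names and arguments are placeholders): start treats its first quoted argument as the window title, and javaw runs a jar without opening a console at all, so each line launches in the background and the script moves straight on to the next one.

        @echo off
        rem Launch every instance without waiting and without console windows.
        start "" javaw -jar server.jar arg1 arg2
        start "" javaw -jar server.jar arg3 arg4
        start "" javaw -jar client.jar arg5 arg6
        start "" javaw -jar client.jar arg7 arg8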

    Read the article

  • How to manipulate a header and then continue with it in C#?

    - by Simon Linder
    Hi all, I want to replace an old ISAPI filter that ran on IIS6. This filter checks whether the request is of a special kind, then manipulates the header and continues with the request. Two headers are added in the manipulating method that I need for calling another special ISAPI module. So I have ISAPI C++ code like:

        DWORD OnPreProc(HTTP_FILTER_CONTEXT *pfc, HTTP_FILTER_PREPROC_HEADERS *pHeaders)
        {
            if (ManipulateHeaderInSomeWay(pfc, pHeaders))
            {
                return SF_STATUS_REQ_NEXT_NOTIFICATION;
            }
            return SF_STATUS_REQ_FINISHED;
        }

    I now want to rewrite this ISAPI filter as a managed module for IIS7. So I have something like this:

        private void OnMapRequestHandler(HttpContext context)
        {
            ManipulateHeaderInSomeWay(context);
        }

    And now what? The request does not seem to do what it should. I already wrote an IIS7 native module that implements the same method, but that method has a return value with which I can tell what to do next:

        REQUEST_NOTIFICATION_STATUS CMyModule::OnMapRequestHandler(IN IHttpContext *pHttpContext, OUT IMapHandlerProvider *pProvider)
        {
            if (DoSomething(pHttpContext))
            {
                return RQ_NOTIFICATION_CONTINUE;
            }
            return RQ_NOTIFICATION_FINISH_REQUEST;
        }

    So is there a way to continue the request with my manipulated context in the managed module as well?
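
    A sketch of what the managed equivalent might look like (assumptions: integrated pipeline, a .NET version with the MapRequestHandler event available, and a made-up header check standing in for the real manipulation). The managed pipeline has no REQUEST_NOTIFICATION_STATUS; returning from the event handler continues the request, and HttpApplication.CompleteRequest() is the closest analogue to RQ_NOTIFICATION_FINISH_REQUEST.

        using System;
        using System.Web;

        public class HeaderModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // MapRequestHandler corresponds to the same pipeline stage as the
                // native OnMapRequestHandler notification (integrated mode only).
                app.MapRequestHandler += OnMapRequestHandler;
            }

            private void OnMapRequestHandler(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;

                if (ManipulateHeaderInSomeWay(app.Context))
                    return;                 // keep going - like RQ_NOTIFICATION_CONTINUE

                app.CompleteRequest();      // stop - like RQ_NOTIFICATION_FINISH_REQUEST
            }

            private static bool ManipulateHeaderInSomeWay(HttpContext context)
            {
                // Placeholder for the asker's real logic; the header name here
                // is invented purely for the sketch.
                return context.Request.Headers["X-Special-Request"] != null;
            }

            public void Dispose() { }
        }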

    Read the article

  • One-sided rounded buttons in Silverlight

    - by xarzu
    I want to make a collection of buttons in Silverlight. They go from left to right and are lined up so that they touch on the left and right sides. Here is the rub: the collection has rounded corners, but the buttons between the end buttons of the collection do not have rounded ends. So basically, the buttons on the far left and far right of the collection have to be somewhat special, because they need one flat vertical side and one rounded side. Is this possible to do in Silverlight without resorting to making a special bitmap for the end buttons?

    One idea I have is to somehow declare a canvas with a bitmap background and then have an overlapping ellipse and rectangle:

        <Canvas Height="100" HorizontalAlignment="Left" Margin="189,381,0,0" VerticalAlignment="Top" Width="200" Background="Black">
            <Rectangle Fill="#FFF4F4F5" HorizontalAlignment="Left" Stroke="Black" Width="58" Height="61" Canvas.Left="7" Canvas.Top="16" />
            <Ellipse Fill="#FFF4F4F5" HorizontalAlignment="Left" Stroke="White" Width="65" StrokeThickness="0" Height="59" Canvas.Left="31" Canvas.Top="17" />
        </Canvas>
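
    A bitmap-free sketch of the idea, assuming plain Border elements stand in for the button visuals (for real Buttons the same Border would go inside a custom control template): CornerRadius takes one value per corner in the order top-left, top-right, bottom-right, bottom-left, so only the outer corners of the end caps are rounded.

        <StackPanel Orientation="Horizontal">
            <!-- left cap: rounded on the left, flat on the right -->
            <Border Background="#FFF4F4F5" CornerRadius="12,0,0,12" Padding="12,6">
                <TextBlock Text="First" />
            </Border>
            <!-- middle buttons: no rounding at all -->
            <Border Background="#FFF4F4F5" CornerRadius="0" Padding="12,6">
                <TextBlock Text="Middle" />
            </Border>
            <!-- right cap: flat on the left, rounded on the right -->
            <Border Background="#FFF4F4F5" CornerRadius="0,12,12,0" Padding="12,6">
                <TextBlock Text="Last" />
            </Border>
        </StackPanel>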

    Read the article

  • Cucumber-rails on jruby installs gem into my app's root directory with bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OS X. When I run a bundle install, it places a cucumber-rails directory in my main app with all of the gem code/dependencies. First off, this is definitely not what I want, and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just manually install cucumber-rails, then when I run script/generate feature blah I get:

        /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError)
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require'
            from script/generate:3

    Similarly, running rake cucumber I get:

        rake aborted!
        source index not created from disk

    So something obviously doesn't work. If I add that cucumber-rails directory back in, then my rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my Rails app? I've never seen this before.

    Setup: jruby-1.4.0, cucumber-0.7.2, cucumber-rails 0.3.1, bundler 0.9.23, webrat 0.7.1

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh.

    The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). It creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right click on the TableAdapter object, the context menu has a “Configure” option that starts the “TableAdapter Configuration Wizard”. The first dialog of that wizard has an Advanced Options button, which leads to an option titled “Refresh the data table”. When used with SQL Server tables, that option causes a statement of the form “select field1, field2, …” to be added on to the end of the commands for the TableAdapter’s InsertCommand and UpdateCommand.

    Do you have any idea what type of property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options Dialog Box has a note stating, “Refreshing the data table is only supported on databases that support batching of SQL statements.” This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported, but I cannot find it. Any ideas?

    Read the article

  • Limiting ssh user account only to access his home directory!

    - by EBAGHAKI
    By reading some tutorials online I used these commands:

    - Make a local group: net localgroup CopsshUsers /ADD
    - Deny access to this group at top level: cacls c:\ /c /e /t /d CopsshUsers
    - Open access to the copSSH installation directory: cacls copssh-inst-dir /c /e /t /r CopsshUsers
    - Add the copSSH user to the group above: net localgroup CopsshUsers mysshuser /add

    Simply put, these commands try to create a user group that has no permissions on your computer and only has access to the copSSH installation directory. That is not what actually happens, since you cannot change the permissions on your Windows directory: the third command won't remove access to the Windows folder (it says access denied in its log). Somehow I achieved that by taking ownership of the Windows folder and then executing the third command, so CopsshUsers now has no permissions on the Windows folder.

    Now I tried to SSH to the server and it simply can't log in! This is kind of funny, because with permission on the Windows directory you can log in, and without it you can't!! So if you CAN SSH to the server, you somehow know that you have access to the Windows directory! (Is this really true??)

    Simple task: limiting an ssh user account to access only his home directory on WINDOWS and nothing else! Guys, please help!

    Read the article

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file.

    My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't pick up the correct values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh   # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, the Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?
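
    A sketch of what the extracted script could look like. Two assumptions worth checking on the asker's setup: Hudson normally exports build parameters as environment variables to anything a build step launches, so $AMI_ID should already be visible inside a child script without any export lines, and set -x is the bash equivalent of "echo on", which restores the "+ command" lines in the console output.

        #!/bin/bash
        # external-script.sh - versioned in VCS, called from the Hudson build step.
        set -e      # stop at the first failing command
        set -x      # print each command before executing it ("echo on")

        echo "Using AMI $AMI_ID and type $TYPE"
        # ... the rest of the deployment commands go here ...

    The build step then reduces to something like ./external-script.sh (or sh -x external-script.sh, which turns the tracing on from the outside instead).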

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, we have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think that it's a good idea to minimize the actual data that needs to be stored in the cache layer, to speed up the application further.

    All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) applies to. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials.

    Is it bad to make a temp table that binds the tables together? It would only have the necessary information and would of course be cached.

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
                1 |        1 |          0
                2 |        1 |          1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented. Are temp tables a common thing for web applications, or is it just bad design?
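
    A sketch of how such a summary table could be built and filled; apart from prices and specials, the table and column names are guesses, and in practice it would be refreshed (or updated by the application) whenever prices or specials change:

        -- Illustrative only: products(id), prices(productId), specials(productId) are assumed.
        CREATE TABLE product_flags (
            productId  INT UNSIGNED NOT NULL PRIMARY KEY,
            hasPrice   TINYINT(1)   NOT NULL DEFAULT 0,
            hasSpecial TINYINT(1)   NOT NULL DEFAULT 0
        );

        INSERT INTO product_flags (productId, hasPrice, hasSpecial)
        SELECT p.id,
               EXISTS (SELECT 1 FROM prices   pr WHERE pr.productId = p.id),
               EXISTS (SELECT 1 FROM specials sp WHERE sp.productId = p.id)
        FROM products p;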

    Read the article

  • Asp.net renders string with wrong encoding, but PHP doesn't (MySQL)

    - by citronas
    I took over an old PHP application with MySQL as the database. Inside the database there are tables whose content includes localized strings (and therefore special characters). Currently there is a PHP application accessing that database; my job is to create an ASP.NET (C# codebehind) application that accesses those strings as well.

    That works. As far as encoding goes, though, when I access these strings I get a kind of encoding problem, for example with 'Ändern' and 'Prüfzeichen', but only in the ASP.NET application. The PHP app sets utf-8 as the charset and the strings are rendered perfectly. In the ASP.NET application it's gibberish, regardless of the page encoding. In the MySQL database, the charset for the table in question, 'translations', is set to 'latin --cp1252 West European' and the collation to 'latin_swedish_ci'.

    I can't seem to figure out what PHP apparently does and ASP.NET does not. I traced the PHP code and could not find any sign of special encoding while getting a string from the database. The question is: how can I ensure correct encoding inside the ASP.NET application without modifying the database, given that big changes to the PHP code are not possible? Does anybody have a clue?
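
    One thing that often decides how such strings come across is the character set the ADO.NET driver negotiates for the connection. A sketch, assuming MySQL Connector/NET and made-up connection details and column name; the CharSet option asks the driver to treat the connection as latin1, so the cp1252 data gets converted rather than reinterpreted:

        using System;
        using MySql.Data.MySqlClient;

        class EncodingCheck
        {
            static void Main()
            {
                // Placeholder credentials; 'CharSet=latin1' is the relevant part.
                const string connectionString =
                    "Server=localhost;Database=legacy;Uid=user;Pwd=secret;CharSet=latin1;";

                using (var conn = new MySqlConnection(connectionString))
                using (var cmd = new MySqlCommand("SELECT label FROM translations", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));   // Ä/ü should now survive
                }
            }
        }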

    Read the article

  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL update and insert queries, file deletes, and folder renamings and creations. I can track the status of all insert commands and undo them if an error is thrown. But how do I do this with the update statements? Is there a smart way (some design pattern?) to keep track of such changes, both in the file structure and in the database?

    My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support. It would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.

    Read the article

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected Python scripts and runs them in a cmd window next to the GUI window. I am able to get my launcher to work on my (Windows XP, 32-bit) laptop, but when I upload it to the server (64-bit Windows, IIS7) I run into some issues. The script runs, to my knowledge, but spits back no information into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together, and I am fairly certain that this is a very bad example of the subprocess module. Just wondering if I could get a hand :). My question is: how do I have to alter my code to work on a 64-bit Windows server? :)

        from Tkinter import *
        import pickle,subprocess,errno,time,sys,os

        PIPE = subprocess.PIPE

        if subprocess.mswindows:
            from win32file import ReadFile, WriteFile
            from win32pipe import PeekNamedPipe
            import msvcrt
        else:
            import select
            import fcntl

        def recv_some(p, t=.1, e=1, tr=5, stderr=0):
            if tr < 1:
                tr = 1
            x = time.time()+t
            y = []
            r = ''
            pr = p.recv
            if stderr:
                pr = p.recv_err
            while time.time() < x or r:
                r = pr()
                if r is None:
                    if e:
                        raise Exception(message)
                    else:
                        break
                elif r:
                    y.append(r)
                else:
                    time.sleep(max((x-time.time())/tr, 0))
            return ''.join(y)

        def send_all(p, data):
            while len(data):
                sent = p.send(data)
                if sent is None:
                    raise Exception(message)
                data = buffer(data, sent)

    The code above isn't mine.

        def Run():
            print filebox.get(0)
            location = filebox.get(0)
            location = location.__str__().replace(listbox.get(ANCHOR).__str__(),"")
            theTime = time.asctime(time.localtime(time.time()))
            lastbox.delete(0, END)
            lastbox.insert(END,theTime)
            for line in CookieCont:
                if listbox.get(ANCHOR) in line and len(line) > 4:
                    line[4] = theTime
                else:
                    "Fill In the rip Details to record the time"
            if __name__ == '__main__':
                if sys.platform == 'win32' or sys.platform == 'win64':
                    shell, commands, tail = ('cmd', ('cd "'+location+'"', listbox.get(ANCHOR).__str__()), '\r\n')
                else:
                    return "Please use contact admin"
                a = Popen(shell, stdin=PIPE, stdout=PIPE)
                print recv_some(a)
                for cmd in commands:
                    send_all(a, cmd + tail)
                    print recv_some(a)
                send_all(a, 'exit' + tail)
                print recv_some(a, e=0)

    The code above is mine :)
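
    For what the launcher seems to need (run a command in a directory and show its output), a much smaller, platform-independent sketch is possible that avoids the win32file/win32pipe machinery entirely; the directory and command below are placeholders:

        import subprocess

        def run_in(directory, command):
            """Run one command in the given directory and return (exit code, output)."""
            proc = subprocess.Popen(
                command,
                cwd=directory,              # replaces the manual 'cd "..."' step
                shell=True,                 # let cmd.exe resolve built-ins and .bat files
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,   # interleave errors with normal output
            )
            output, _ = proc.communicate()  # waits for the command to finish
            return proc.returncode, output

        if __name__ == '__main__':
            code, out = run_in(r"C:\scripts", "dir")
            print out

    The trade-off: communicate() blocks until the child exits, so output appears at the end rather than live; the non-blocking reads in the original recipe exist precisely to avoid that.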

    Read the article

  • How could application installations/configurations be easier in Linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier over time, and the apt-get installations with Ubuntu/Debian are heading the right way. But how can Linux be more user-friendly for us in the future?

    I thought that it would help if more were automated, like in an IDE environment: e.g. typing svn would give us all the commands and a description of each command as you move between commands with your keyboard. That would be great, but that's just one example. Another is navigation between folders in the terminal: right now you have to type a lot just to jump from/to different folders. It would be great with some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the CPU; they make the system more user-friendly and encourage maintenance of a server rather than frightening you off.

    What do you think about this? Should/could we have a more user-friendly Linux environment on servers - something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so repetitive today and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • Why does my perl server stop working when I press 'enter'?

    - by David
    I have created a server in Perl that sends messages or commands to the client. I can send commands just fine, but when I am being prompted for the command on the server I have created, if I press 'enter', the server messes up. Why is this happening? Here is part of my code:

        print "\nConnection recieved from IP address $peer_address on port $peer_port ";
        $closed_message = "\n\tTerminated client session...";
        while (1) {
            print "\nCommand: ";
            $send_data = <STDIN>;
            chop($send_data);
            if ($send_data eq 'e' or $send_data eq 'E' or $send_data eq ' E' or $send_data eq ' E '
                or $send_data eq 'E ' or $send_data eq ' e' or $send_data eq ' e ' or $send_data eq 'e') {
                $client_socket->send ($send_data);
                close $client_socket;
                print "$closed_message\n";
                &options;
            }
            else {
                $client_socket->send($send_data);
            }
            $client_socket->recv($recieved_data,8000);
            print "\nRecieved: $recieved_data";
        }
        }
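
    A small sketch of one likely culprit (a guess, since the rest of the code is not shown): pressing Enter alone makes $send_data an empty string, and sending an empty string over a stream socket delivers nothing the client can react to, which can leave the two ends waiting on each other. Skipping blank input, and using chomp (which only strips a trailing newline) instead of chop, avoids that; the variable names follow the asker's code:

        while (1) {
            print "\nCommand: ";
            chomp($send_data = <STDIN>);          # chomp only removes the newline
            next if $send_data =~ /^\s*$/;        # ignore a bare Enter and re-prompt

            if ($send_data =~ /^\s*e\s*$/i) {     # covers 'e', 'E', and the padded variants
                $client_socket->send($send_data);
                close $client_socket;
                print "$closed_message\n";
                &options;
            }
            else {
                $client_socket->send($send_data);
            }
            $client_socket->recv($recieved_data, 8000);
            print "\nRecieved: $recieved_data";
        }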

    Read the article

  • When software problems reported are not really software problems

    - by AndyUK
    Hi. Apologies if this has already been covered, or if you think it really belongs on the wiki.

    I am a software developer at a company that manufactures microarray printing machines for the biosciences industry. I am primarily involved in interfacing with various bits of hardware (pneumatics, hydraulics, stepper motors, sensors, etc.) via GUI development in C++ to aspirate and print samples onto microarray slides. On joining the company I noticed that whenever there was a hardware-related problem, it would cause the whole setup to freeze, with nobody being any the wiser as to what the specific problem was - hardware / software / misuse, etc.

    Since then I have improved things somewhat by introducing software timeouts and exception handling to better identify and deal with any hardware-related problems that arise, e.g. PLC commands not successfully completed, inappropriate FPGA response commands, and various other deadlock-type conditions. In addition, the software will now log a summary of the specific problem, inform the user, and exit the thread gracefully. This software is not embedded, just interfacing using serial ports.

    In spite of what has been achieved, non-software guys still do not fully appreciate that in these cases the 'software' problem they are reporting to me is not really a software problem; rather, the software is reporting a problem, not causing it. Don't get me wrong, there is nothing I enjoy more than coming down on software bugs like a ton of bricks, and looking at ways of improving robustness in any way. I know the system well enough now that I almost have a sixth sense for these things. But no matter how many times I try to explain this point to people, it does not really penetrate. They still report what are essentially hardware problems (which eventually get fixed) as software ones.

    I would like to hear from any others who have endured similar finger-pointing experiences and what methods they used to deal with them.

    Read the article

  • [LaTeX] breaking an environment across pages - the smart way

    - by Flavius
    Hi, I am using the exercise package to display exercises in a book. I have redefined some commands like this, which basically adds some space, a pencil, and two \hrule's before and after the exercise:

        \renewcommand{\ExerciseHeader}{\vskip 1em\hrule\vskip 1em\centerline{\textbf{\large\smallpencil
          \ExerciseHeaderNB\ExerciseHeaderTitle%
          \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}

        \makeatletter\def\endExerciseEnv{\termineliste{1}\@EndExeBox\vskip .5em\hrule\vskip 1em}\makeatother

    Now this works, but there is a small problem: there are situations where only the \hrule ends up at the bottom of a page, and the rest of the exercise goes on the next page. There is also the opposite behaviour: the entire exercise is on one page, except for the \hrule in \endExerciseEnv, which is flushed to the next page.

    My question is: how do I force the top \hrule to come either together with the header of the exercise (caption, title, whatever) and at least the first two paragraphs (or \ExePath and a paragraph, or anything like that, but there must be at least "two things", so it doesn't look ugly), OR be flushed altogether with the entire exercise? Similar question for the bottom \hrule: how do I force it to have at least two items in front of it on the page where the rule itself ends up? Any LaTeX guru who knows that?

    Addendum: I have asked LaTeX questions like this in the past and got answers which required me to do stuff manually, like "insert a \vskip here and there" or such. Let me be clear: this is a book, there are lots of exercises, and I NEED this to be done "automatically", by going the proper way of redeclaring commands & co.
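
    One automatic approach worth sketching (assuming the needspace package is acceptable): reserve a minimum amount of vertical space before the opening rule, so it can never be stranded at the bottom of a page, and put a \nopagebreak in front of the closing rule so it cannot be pushed to the next page on its own.

        % In the preamble:
        \usepackage{needspace}

        \renewcommand{\ExerciseHeader}{%
          \Needspace*{5\baselineskip}% rule + header + roughly two lines must fit here
          \vskip 1em\hrule\vskip 1em
          \centerline{\textbf{\large\smallpencil
            \ExerciseHeaderNB\ExerciseHeaderTitle%
            \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}

        \makeatletter
        \def\endExerciseEnv{\termineliste{1}\@EndExeBox
          \nopagebreak[4]\vskip .5em\hrule\vskip 1em}
        \makeatother

    The 5\baselineskip figure is a guess to tune; \Needspace* starts a new page whenever less than that much room is left, which is exactly the "rule alone at the bottom" case.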

    Read the article

  • Advantages of Using Linux as primary developer desktop

    - by Nick N
    I want to get some input on the advantages of developers using Linux as their primary development desktop on a daily basis, as opposed to using Windows. This is particularly relevant when your dev, QA, and production environments are Linux. The analogy that I keep coming back to is: if I build my demo car as a Ford Escort, but my project car is a Ford Mustang, it doesn't make sense at all. I'm currently at an IT department that allows dual boot with Windows and Linux; some run Linux while the vast majority use Windows.

    Here are several advantages that I've come up with since using Linux as a primary desktop:

    - The same exact operating system as dev, QA, and production.
    - The same scripts (.sh) instead of maintaining both (.bat and .sh). Somewhat mitigated by using cygwin, but still a bit different.
    - The team learns simple commands such as: cd, ls, cat, top
    - The team learns advanced commands like: pkill, pgrep, chmod, su, sudo, ssh, scp
    - Full access to installs typical for Linux, such as RPM and DEB installs, just like the target environments.

    The list could go on and on, but I want to get some feedback on anything that I may have missed, or even any disadvantages (of course there are some). To me it makes sense to migrate an entire team over to using Linux, and to use VirtualBox running Windows XP VMs to test functional items that 95% of the world uses. There is a similar but slightly different thread going on here as well: link text

    Read the article

  • Does CSS have a "start over" feature?

    - by Rick Wayne
    I'm using calendar_date_select (henceforth CDS) in a Rails application, and have a stupid question. When I embed the CDS component in the middle of an already-CSS-styled page, all manner of things go ugly-wrong with it (spacing, fonts, etc.). Clearly the elements inside the CDS have inherited unwanted stuff from the styles already at work in the containing page.

    Now, I could use a combination of, say, Safari's CSS debugging and analyze what's wrong element by element. But that's (A) tedious, and (B) might load up my component's styles with tons of container-defeating special cases. If nothing else, I'm certain to change the containing page's styles in the future and would have to maintain the special cases.

    My question: is it possible to have a DIV in a page that essentially backs out all the existing styling? Is there a simple one-liner that will do this? Failing that, can it be done on an element-by-element basis? E.g. I know what tags the CDS generates, so I could list each of them:

        { p: "#--NOTHING--#"; a: "#--NOTHING--#"; }

    where #--NOTHING--# is the Magic Turn Off All Inherited Styles incantation. http://code.google.com/p/calendardateselect/ Thanks, peeps.
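
    For what it's worth, a sketch of the closest thing to such a one-liner in present-day CSS (it did not exist when this was asked, and the wrapper class name here is just an assumed hook around the widget): the all shorthand resets every property, including inherited ones, for the widget and everything inside it.

        /* Assumed markup: <div class="cds-reset"> ...calendar_date_select output... </div> */
        .cds-reset,
        .cds-reset * {
            all: revert;   /* back out the page's styling; 'initial' or 'unset' also work */
        }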

    Read the article

  • emacs -- keybind questions

    - by user565739
    I have successfully bound Ctrl+Shift+Up, Ctrl+Shift+Down, Ctrl+Shift+Left and Ctrl+Shift+Right to different commands. But when I tried to bind Ctrl+s to the command save-buffer and Ctrl+Shift+s (which is equivalent to Ctrl+S) to another command, I ran into a problem: save-buffer works fine, but when I type Ctrl+Shift+s, it also executes save-buffer. I used Ctrl+q to find the control sequences of Ctrl+s and Ctrl+Shift+s, and I get the same result for both, which is ^S. I expected to get ^s for Ctrl+s, but I don't. Does anyone know the reason?

    Another question: I use Ctrl+c for the command kill-ring-save. In this case, all the commands (and there are a large number of them) that begin with Ctrl+c don't work any more. Is there a way to replace the prefix Ctrl+c with another customized prefix?

    I may have posed my question in the wrong direction. I use Ctrl+c as kill-ring-save. It works fine in Emacs (no particular mode). But if I open a .c file (C mode), then when I type Ctrl+c, Emacs waits for me to type another key; I think in this case Ctrl+c is regarded as a prefix. So I need the following modifications:

    1. Use a custom-defined prefix, say Ctrl+a, as Ctrl+c.
    2. Remove the prefix Ctrl+c.
    3. Use Ctrl+c as kill-ring-save.

    I added the following to my ~/.emacs:

        (defun my-c-initialization-hook ()
          (define-key c-mode-base-map (kbd "C-a") mode-specific-map)
          (define-key c-mode-base-map (kbd "C-c") 'kill-ring-save))
        (add-hook 'c-initialization-hook 'my-c-initialization-hook)

    But this doesn't work. Ctrl+c is still regarded as a prefix, so I can't use it as kill-ring-save. Furthermore, if I type Ctrl+a Ctrl+c, it says it's not defined. (I thought it would have the same result as typing Ctrl+c Ctrl+c.)
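
    Two small observations, sketched as Emacs Lisp (the key choices and the second command are only examples). First, in a terminal Ctrl+s and Ctrl+Shift+s really do arrive as the same ^S character, but a graphical Emacs can tell them apart with the C-S-s syntax. Second, mode-specific-map is the keymap that normally lives behind the C-c prefix, so it can be given a new home before C-c is rebound:

        ;; Distinguishing C-s and C-S-s works in a graphical frame only;
        ;; in a terminal both keys send ^S and cannot be separated.
        (global-set-key (kbd "C-s")   'save-buffer)
        (global-set-key (kbd "C-S-s") 'save-some-buffers)   ; example target command

        ;; Move the usual C-c prefix map to C-a, then reuse C-c globally.
        ;; (C-a normally runs move-beginning-of-line, so this is a trade-off.)
        (global-set-key (kbd "C-a") mode-specific-map)
        (global-set-key (kbd "C-c") 'kill-ring-save)

    Note that major modes such as C mode still install their own C-c bindings in their mode maps, which shadow the global one, so those maps need the same treatment - which is roughly what the asker's c-initialization-hook is attempting.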

    Read the article

  • PHP's openssl_sign generates different signature than SSCrypto's sign

    - by pascalj
    I'm writing an OS X client for a piece of software that is written in PHP. This software uses a simple RPC interface to receive and execute commands. The RPC client has to sign the commands it sends, to ensure that no MITM can modify any of them. However, as the server was not accepting the signatures I sent from my OS X client, I started investigating and found out that PHP's openssl_sign function generates a different signature for a given private key/data combination than the Objective-C SSCrypto framework (which is only a wrapper for the openssl lib):

        SSCrypto *crypto = [[SSCrypto alloc] initWithPrivateKey:self.localPrivKey];
        NSData *shaed = [self sha1:@"hello"];
        [crypto setClearTextWithData:shaed];
        NSData *data = [crypto sign];

    generates a signature like CtbkSxvqNZ+mAN..., while the PHP code

        openssl_sign("hello", $signature, $privateKey);

    generates a signature like 6u0d2qjFiMbZ+... (for my particular key, of course; base64 encoded).

    I'm not quite sure why this is happening, and I have unsuccessfully experimented with different hash algorithms. As the PHP documentation states, SHA1 is used by default. So why do these two functions generate different signatures, and how can I get my Objective-C part to generate a signature that PHP's openssl_verify will accept? Note: I double-checked that the keys and the data are correct!

    Read the article

  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the bad title (I couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.). So, obviously, I can't do development on my Windows machine alone, and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine.

    Is there a way whereby any changes I make on my dev machine will be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it.

    On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications, etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks guys!

    EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that too? Thanks.
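
    One lightweight option to sketch, assuming rsync and ssh are available on the Windows side (e.g. via cygwin) and using placeholder host names and paths: a single rsync invocation pushes only the files that changed, so it stays fast even for minor edits and can be bound to a key in the editor or run in a loop.

        # Mirror the working copy to the Linux box; only deltas are transferred.
        rsync -avz --delete /cygdrive/c/projects/myapp/ user@linux-host:/home/user/myapp/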

    Read the article
