Search Results

Search found 5727 results on 230 pages for 'routed commands'.

Page 68/230

  • Limiting ssh user account only to access his home directory!

    - by EBAGHAKI
    By reading some tutorials online I used these commands:

        Make a local group:
            net localgroup CopsshUsers /ADD
        Deny access to this group at the top level:
            cacls c:\ /c /e /t /d CopsshUsers
        Open access to the copSSH installation directory:
            cacls copssh-inst-dir /c /e /t /r CopsshUsers
        Add the copSSH user to the group above:
            net localgroup CopsshUsers mysshuser /add

    Simply put, these commands try to create a user group that has no permissions on your computer and only has access to the copSSH installation directory. In practice that isn't what happens: you cannot change the permissions on your Windows directory, so the command that denies access won't remove access to the Windows folder (its log says "access denied"). I achieved it anyway by taking ownership of the Windows folder and then running the deny command again, so CopsshUsers now has no permissions on the Windows folder. But now I try to SSH to the server and I simply can't log in! This is kind of funny, because with permission on the Windows directory you can log in, and without it you can't. So if you CAN SSH to the server, you know you have access to the Windows directory! (Is this really true??) Simple task: limit the SSH user account to access only his home directory on WINDOWS and nothing else. Guys, please help!

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements needed to do the refresh. The functionality I'm referring to can be reached by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). That creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right-click the TableAdapter object, the context menu has a "Configure" option that starts the "TableAdapter Configuration Wizard". The first dialog of that wizard has an Advanced Options button, which leads to an option titled "Refresh the data table". When used with SQL Server tables, that option causes a statement of the form "select field1, field2, ..." to be appended to the TableAdapter's InsertCommand and UpdateCommand. Do you have any idea what type of property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options Dialog Box has a note stating, "Refreshing the data table is only supported on databases that support batching of SQL statements." This seems to imply that a .NET data provider might need to expose some property indicating that such behavior is supported, but I cannot find it. Any ideas?

    Read the article

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment work, which runs certain shell commands in one of its build steps. That script has become complicated enough that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file. My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't see the substituted values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh    # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" contains both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line is its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, the Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?

    Read the article

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier over time, and apt-get installations with Ubuntu/Debian are heading in the right direction. But how could Linux be more user-friendly for us in the future? I thought it would help if more were automated, like in an IDE environment - e.g. typing svn would show all the available subcommands and a description of each as you move between them with the keyboard. That would be great, but that's just one example. Another is navigating between folders in the terminal: right now you have to type a lot just to jump from one folder to another. Some more automation would be great here too. I know these extra features will slow down the server, but it's 2010 now; these features are not that heavy for the CPU, and they make a server more user-friendly and encourage maintenance instead of frightening you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so repetitive today and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • When software problems reported are not really software problems

    - by AndyUK
    Hi. Apologies if this has already been covered or you think it really belongs on a wiki. I am a software developer at a company that manufactures microarray printing machines for the biosciences industry. I am primarily involved in interfacing with various bits of hardware (pneumatics, hydraulics, stepper motors, sensors etc.) via GUI development in C++ to aspirate and print samples onto microarray slides. On joining the company I noticed that whenever there was a hardware-related problem, the whole setup would freeze, with nobody any the wiser as to what the specific problem was - hardware, software, misuse, etc. Since then I have improved things somewhat by introducing software timeouts and exception handling to better identify and deal with any hardware-related problems that arise, e.g. PLC commands not completing successfully, inappropriate FPGA response commands, and various other deadlock-type conditions. In addition, the software now logs a summary of the specific problem, informs the user and exits the thread gracefully. This software is not embedded; it just interfaces with the hardware over serial ports. In spite of what has been achieved, the non-software guys still do not fully appreciate that in these cases the 'software' problem they are reporting to me is not really a software problem: the software is reporting a problem, not causing it. Don't get me wrong, there is nothing I enjoy more than coming down on software bugs like a ton of bricks and looking at ways of improving robustness in any way I can. I know the system well enough now that I almost have a sixth sense for these things. But no matter how many times I try to explain this point to people, it does not really sink in. They still report what are essentially hardware problems (which eventually get fixed) as software ones. I would like to hear from others who have endured similar finger-pointing experiences and what methods they used to deal with them.

    Read the article

  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL update and insert queries, file deletes, and folder renames and creations. I can track the status of all insert commands and undo them if an error is thrown, but how do I do this with the update statements? Is there a smart way (some design pattern?) to keep track of such changes both in the file structure and in the database? My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support. It would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
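
    One common approach to this kind of problem is an undo log in the style of the command pattern: before each step runs, record how to reverse it, and on failure replay the reversals in the opposite order. The sketch below is in Python for brevity (the idea carries over to PHP); the class, table and column names are made up for illustration and are not from the question.

        import shutil

        class UndoLog(object):
            def __init__(self):
                self._undo_stack = []

            def run(self, do, undo):
                # Execute `do`; if it succeeds, remember how to reverse it.
                do()
                self._undo_stack.append(undo)

            def rollback(self):
                # Reverse the completed steps in the opposite order.
                while self._undo_stack:
                    self._undo_stack.pop()()

        def move_file(log, src, dst):
            log.run(lambda: shutil.move(src, dst),
                    lambda: shutil.move(dst, src))

        def update_title(log, cursor, post_id, new_title):
            # Read the old value first so the UPDATE can be reversed without transactions.
            cursor.execute("SELECT title FROM posts WHERE id = %s", (post_id,))
            (old_title,) = cursor.fetchone()
            log.run(lambda: cursor.execute("UPDATE posts SET title = %s WHERE id = %s", (new_title, post_id)),
                    lambda: cursor.execute("UPDATE posts SET title = %s WHERE id = %s", (old_title, post_id)))

        log = UndoLog()
        try:
            move_file(log, "uploads/a.txt", "archive/a.txt")
            # update_title(log, cursor, 42, "new title")   # cursor would come from a MySQL driver
        except Exception:
            log.rollback()
            raise

    The key point for the update statements is the same regardless of language: capture the old values before issuing the UPDATE, so the inverse statement can be generated even without InnoDB transactions.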

    Read the article

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected Python scripts and runs them in a cmd window next to the GUI window. I am able to get my launcher to work on my laptop (Windows XP, 32-bit), but when I upload it to the server (64-bit Windows, IIS 7) I run into some issues. The script runs, to my knowledge, but spits no information back into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together to get it to work; I am fairly certain it is a very bad example of the subprocess module. My question is: how do I have to alter my code to work on a 64-bit Windows server? Just wondering if I could get a hand :)

        from Tkinter import *
        import pickle, subprocess, errno, time, sys, os

        PIPE = subprocess.PIPE

        if subprocess.mswindows:
            from win32file import ReadFile, WriteFile
            from win32pipe import PeekNamedPipe
            import msvcrt
        else:
            import select
            import fcntl

        def recv_some(p, t=.1, e=1, tr=5, stderr=0):
            if tr < 1:
                tr = 1
            x = time.time() + t
            y = []
            r = ''
            pr = p.recv
            if stderr:
                pr = p.recv_err
            while time.time() < x or r:
                r = pr()
                if r is None:
                    if e:
                        raise Exception(message)
                    else:
                        break
                elif r:
                    y.append(r)
                else:
                    time.sleep(max((x - time.time()) / tr, 0))
            return ''.join(y)

        def send_all(p, data):
            while len(data):
                sent = p.send(data)
                if sent is None:
                    raise Exception(message)
                data = buffer(data, sent)

    The code above isn't mine.

        def Run():
            print filebox.get(0)
            location = filebox.get(0)
            location = location.__str__().replace(listbox.get(ANCHOR).__str__(), "")
            theTime = time.asctime(time.localtime(time.time()))
            lastbox.delete(0, END)
            lastbox.insert(END, theTime)
            for line in CookieCont:
                if listbox.get(ANCHOR) in line and len(line) > 4:
                    line[4] = theTime
                else:
                    "Fill In the rip Details to record the time"
            if __name__ == '__main__':
                if sys.platform == 'win32' or sys.platform == 'win64':
                    shell, commands, tail = ('cmd', ('cd "' + location + '"', listbox.get(ANCHOR).__str__()), '\r\n')
                else:
                    return "Please use contact admin"
            a = Popen(shell, stdin=PIPE, stdout=PIPE)
            print recv_some(a)
            for cmd in commands:
                send_all(a, cmd + tail)
                print recv_some(a)
            send_all(a, 'exit' + tail)
            print recv_some(a, e=0)

    The code above is mine :)
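
    If the commands do not need interactive back-and-forth, one simpler pattern worth trying (a minimal sketch, not tied to the original GUI, and not guaranteed to be the 64-bit fix) is to run everything in one shot with Popen and communicate(), which avoids the win32 pipe-peeking code entirely:

        import subprocess

        def run_in_dir(location, command):
            # cmd /C runs the command and exits; cwd= avoids building a 'cd ...' command by hand.
            proc = subprocess.Popen('cmd /C ' + command,
                                    cwd=location,
                                    stdin=subprocess.PIPE,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            out, _ = proc.communicate()
            return proc.returncode, out

        rc, output = run_in_dir(r'C:\some\folder', 'somescript.py arg1')
        print output

    Capturing stderr into stdout as shown also makes it easier to see why a script produces no output on the server.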

    Read the article

  • [LaTeX] breaking an environment across pages - the smart way

    - by Flavius
    Hi, I am using the exercise package to display exercises in a book. I have redefined some commands like this, which basically adds some space, a pencil, and two \hrule's before and after the exercise:

        \renewcommand{\ExerciseHeader}{\vskip 1em\hrule\vskip 1em\centerline{\textbf{\large\smallpencil
          \ExerciseHeaderNB\ExerciseHeaderTitle%
          \ExerciseHeaderDifficulty\ExerciseHeaderOrigin\medskip}}}

        \makeatletter\def\endExerciseEnv{\termineliste{1}\@EndExeBox\vskip .5em\hrule\vskip 1em}\makeatother

    Now this works, but there's a small problem: there are situations where only the \hrule ends up at the bottom of a page and the rest of the exercise goes on the next page. The opposite also happens: the entire exercise is on one page, except the \hrule in \endExerciseEnv, which is flushed to the next page. My question is: how do I force the top \hrule to stay either together with the header of the exercise (caption, title, whatever not) and at least the first two paragraphs (or \ExePath and a paragraph, or anything like that, but there must be at least "two things", so it doesn't look ugly), OR be flushed altogether with the entire exercise? Similar question for the bottom \hrule: how do I force it to have at least two items in front of it on the page where the \hrule itself ends up? Any LaTeX guru who knows that? Addendum: I have asked LaTeX questions like this in the past and have got answers which required me to do things manually, like "insert a \vskip here and there" or such. Let me be clear: this is a book, there are lots of exercises, and I NEED this to be done "automatically", by going the proper way of redeclaring commands & co.

    Read the article

  • Why does my perl server stop working when i press 'enter'?

    - by David
    I have created a server in Perl that sends messages or commands to the client. I can send commands just fine, but when I am being prompted for a command on the server I have created, if I press 'enter' the server messes up. Why is this happening? Here is part of my code:

        print "\nConnection recieved from IP address $peer_address on port $peer_port ";
        $closed_message = "\n\tTerminated client session...";
        while (1) {
            print "\nCommand: ";
            $send_data = <STDIN>;
            chop($send_data);
            if ($send_data eq 'e' or $send_data eq 'E' or $send_data eq ' E' or $send_data eq ' E '
                or $send_data eq 'E ' or $send_data eq ' e' or $send_data eq ' e ' or $send_data eq 'e') {
                $client_socket->send ($send_data);
                close $client_socket;
                print "$closed_message\n";
                &options;
            }
            else {
                $client_socket->send($send_data);
            }
            $client_socket->recv($recieved_data, 8000);
            print "\nRecieved: $recieved_data";
        }
        }

    Read the article

  • Advantages of Using Linux as primary developer desktop

    - by Nick N
    I want to get some input on the advantages of why developers should and need to use Linux as their primary development desktop on a daily basis, as opposed to using Windows. This is particularly relevant when your Dev, QA, and Production environments are Linux. The analogy that I keep coming back to is: if I build my demo car as a Ford Escort, but my project car is a Ford Mustang, it doesn't make sense at all. I'm currently at an IT department that allows dual boot with Windows and Linux; some run Linux while the vast majority use Windows. Here are several advantages that I've come up with since using Linux as a primary desktop:

        - The same exact operating system as Dev, QA, and Production.
        - The same scripts (*.sh) instead of maintaining both *.bat and *.sh. Somewhat mitigated by using Cygwin, but still a bit different.
        - The team learns simple commands such as: cd, ls, cat, top.
        - The team learns advanced commands like: pkill, pgrep, chmod, su, sudo, ssh, scp.
        - Full access to the installs typical for Linux, such as RPM and DEB packages, just like the target environments.

    The list could go on and on, but I want to get some feedback on anything that I may have missed, or even any disadvantages (of course there are some). To me it makes sense to migrate an entire team over to using Linux, and to use VirtualBox running Windows XP VMs to test functional items that 95% of the world uses. There is a similar but slightly different thread going on here as well.

    Read the article

  • PHP's openssl_sign generates different signature than SSCrypto's sign

    - by pascalj
    I'm writing an OS X client for a piece of software that is written in PHP. This software uses a simple RPC interface to receive and execute commands. The RPC client has to sign the commands it sends to ensure that no MITM can modify any of them. However, as the server was not accepting the signatures I sent from my OS X client, I started investigating and found out that PHP's openssl_sign function generates a different signature for a given private key/data combination than the Objective-C SSCrypto framework (which is only a wrapper for the openssl lib):

        SSCrypto *crypto = [[SSCrypto alloc] initWithPrivateKey:self.localPrivKey];
        NSData *shaed = [self sha1:@"hello"];
        [crypto setClearTextWithData:shaed];
        NSData *data = [crypto sign];

    generates a signature like CtbkSxvqNZ+mAN..., while the PHP code

        openssl_sign("hello", $signature, $privateKey);

    generates a signature like 6u0d2qjFiMbZ+... (for my particular key, of course; base64 encoded). I'm not quite sure why this is happening, and I have unsuccessfully experimented with different hash algorithms. As the PHP documentation states, SHA1 is used by default. So why do these two functions generate different signatures, and how can I get my Objective-C part to generate a signature that PHP's openssl_verify will accept? Note: I double-checked that the keys and the data are correct!

    Read the article

  • emacs -- keybind questions

    - by user565739
    I have successfully bound Ctrl+Shift+Up, Ctrl+Shift+Down, Ctrl+Shift+Left and Ctrl+Shift+Right to different commands. But when I tried to bind Ctrl+s to the command save-buffer and Ctrl+Shift+s (which is equivalent to Ctrl+S) to another command, I ran into a problem: save-buffer works fine, but when I type Ctrl+Shift+s it also executes save-buffer. I used Ctrl+q to find the control sequences of Ctrl+s and Ctrl+Shift+s, and I get the same result for both, which is ^S. I expected to get ^s for Ctrl+s, but I don't. Does anyone know the reason? Another question: I use Ctrl+c for the command kill-ring-save. In this case, all the commands (of which there are a large number) that begin with Ctrl+c no longer work. Is there a way to replace the Ctrl+c prefix with another customized prefix? I may have posed my question in the wrong direction. I use Ctrl+c as kill-ring-save. It works fine in Emacs (no particular mode). But if I open a .c file (C mode), then when I type Ctrl+c it waits for me to type another key; I think in this case Ctrl+c is regarded as a prefix. So I need the following modifications: use a custom prefix, say Ctrl+a, in place of Ctrl+c; remove the Ctrl+c prefix; and use Ctrl+c as kill-ring-save. I added the following to my ~/.emacs:

        (defun my-c-initialization-hook ()
          (define-key c-mode-base-map (kbd "C-a") mode-specific-map)
          (define-key c-mode-base-map (kbd "C-c") 'kill-ring-save))
        (add-hook 'c-initialization-hook 'my-c-initialization-hook)

    But this doesn't work. Ctrl+c is still regarded as a prefix, so I can't use it as kill-ring-save. Furthermore, if I type Ctrl+a Ctrl+c, it says it's not defined. (I thought it would have the same effect as typing Ctrl+c Ctrl+c.)

    Read the article

  • MMS2R and Multiple Images Rails

    - by Maletor
    Here's my code:

        require 'mms2r'

        class IncomingMailHandler < ActionMailer::Base
          ##
          # Receives email(s) from MMS-Email or regular email and
          # uploads that content to the user's photos.
          # TODO: Use beanstalkd for background queueing and processing.
          def receive(email)
            begin
              mms = MMS2R::Media.new(email)
              ##
              # Ok to find user by email as long as activate upon registration.
              # Remember to make UI option that users can opt out of registration
              # and either not send emails or send them to a [email protected]
              # type address.
              ##
              # Remember to get SpamAssasin
              if (@user = User.find_by_email(email.from) && email.has_attachments?)
                mms.media.each do |key, value|
                  if key.include?('image')
                    value.each do |file|
                      @user.photos.push Photo.create!(:uploaded_data => File.open(file),
                        :title => email.subject.empty? ? "Untitled" : email.subject)
                    end
                  end
                end
              end
            ensure
              mms.purge
            end
          end
        end

    and here's my error:

        /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/runner.rb:48: undefined method `photos' for true:TrueClass (NoMethodError)
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:23:in `each'
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:23:in `receive'
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:21:in `each'
          from /usr/home/xxx/app/models/incoming_mail_handler.rb:21:in `receive'
          from /usr/local/lib/ruby/gems/1.8/gems/actionmailer-2.3.4/lib/action_mailer/base.rb:419:in `receive'
          from (eval):1
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `eval'
          from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/runner.rb:48
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from /home/xxx/script/runner:3

    I sent an email to the server with two image attachments. Upon receiving the email the server runs:

        | ruby /xxx/script/runner 'IncomingMailHandler.receive STDIN.read'

    What is going on? What am I doing wrong? The MMS2R docs are here: http://mms2r.rubyforge.org/mms2r/

    Read the article

  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.). So obviously I can't do all the development on my Windows machine, and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that too? Thanks.
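
    One quick option (a suggestion of mine, not from the post) is to push the working tree over SSH/SFTP from the Windows box whenever something changes. A minimal sketch using the paramiko library follows; it assumes paramiko is acceptable on the Windows side, that the remote directory layout already exists, and it makes no attempt at change detection, so it copies everything each run. The hostnames and paths are placeholders.

        import os
        import paramiko

        def push_tree(local_root, remote_root, host, user):
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user)
            sftp = client.open_sftp()
            try:
                for dirpath, dirnames, filenames in os.walk(local_root):
                    for name in filenames:
                        local_path = os.path.join(dirpath, name)
                        rel = os.path.relpath(local_path, local_root).replace(os.sep, '/')
                        # Remote directories are assumed to exist already.
                        sftp.put(local_path, remote_root + '/' + rel)
            finally:
                sftp.close()
                client.close()

        push_tree(r'C:\work\project', '/home/jamie/project', 'cluster-master', 'jamie')

    Bound to a file-watcher or an editor save hook, this gives a rough "mirror on save" without going through SVN for every small change.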

    Read the article

  • wscript.shell running file with space in path with PHP

    - by ermac2014
    I was trying to use WScript.Shell through COM objects with PHP to pass some cmd commands to the cURL library (the DOS version). Here is what I use to perform this task:

        function windExec($cmd, $mode='') {
            // Setup the command to run from "run"
            $cmdline = "cmd /C $cmd";
            // Set up the output and mode
            if ($mode == 'FG') {
                $outputfile = uniqid(time()) . ".txt";
                $cmdline .= " > $outputfile";
                $m = true;
            } else
                $m = false;
            // Make a new instance of the COM object
            $WshShell = new COM("WScript.Shell");
            // Make the command window but dont show it.
            $oExec = $WshShell->Run($cmdline, 0, $m);
            if ($outputfile) {
                // Read the tmp file.
                $retStr = file_get_contents($outputfile);
                // Delete the temp_file.
                unlink($outputfile);
            } else
                $retStr = "";
            return $retStr;
        }

    Now when I run this function like:

        windExec("\"C:/Documents and Settings/ermac/Desktop/my project/curl\" http://www.google.com/", 'FG');

    cURL doesn't run because there is a problem with the path. But when I remove the spaces from the path it works great:

        windExec("\"C:/curl\" http://www.google.com/", 'FG');

    So my question is: how can I escape these spaces in WScript.Shell commands? Is there any way I can fix this? Thanks in advance :)
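
    A commonly suggested workaround for this class of problem (not verified against this exact PHP setup) is that cmd.exe /C strips the outermost pair of quotes it sees, so the quoted program path gets broken at the spaces; wrapping the whole /C argument in an extra pair of quotes, typically together with /S, keeps the inner quotes intact. Illustrated here in Python only to show the resulting command strings:

        program = r'C:\Documents and Settings\ermac\Desktop\my project\curl'
        url = 'http://www.google.com/'

        # Roughly what the PHP code builds today -- the quotes around the program path get eaten:
        broken = 'cmd /C "%s" %s' % (program, url)

        # The usual workaround -- an extra set of surrounding quotes for /C:
        workaround = 'cmd /S /C ""%s" %s"' % (program, url)
        print workaround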

    Read the article

  • Including partial views when applying the Mode-View-ViewModel design pattern

    - by Filip Ekberg
    Consider that I have an application that just handles Messages and Users. I want my Window to have a common Menu and an area where the current View is displayed. I can only work with either Messages or Users, so I cannot work simultaneously with both Views. Therefore I have the following controls:

        MessageView.xaml
        UserView.xaml

    Just to make it a bit easier, both the Message model and the User model look like this:

        Name
        Description

    Now, I have the following three ViewModels:

        MainWindowViewModel
        UsersViewModel
        MessagesViewModel

    The UsersViewModel and the MessagesViewModel both just fetch an ObservableCollection<T> of their respective model, which is bound in the corresponding View like this:

        <DataGrid ItemSource="{Binding ModelCollection}" />

    The MainWindowViewModel hooks up two different Commands that implement ICommand and look something like the following:

        public class ShowMessagesCommand : ICommand
        {
            private ViewModelBase ViewModel { get; set; }

            public ShowMessagesCommand(ViewModelBase viewModel)
            {
                ViewModel = viewModel;
            }

            public void Execute(object parameter)
            {
                var viewModel = new ProductsViewModel();
                ViewModel.PartialViewModel = new MessageView { DataContext = viewModel };
            }

            public bool CanExecute(object parameter)
            {
                return true;
            }

            public event EventHandler CanExecuteChanged;
        }

    And there is another one like it that will show Users. Now, this introduced ViewModelBase, which only holds the following:

        public UIElement PartialViewModel
        {
            get { return (UIElement)GetValue(PartialViewModelProperty); }
            set { SetValue(PartialViewModelProperty, value); }
        }

        public static readonly DependencyProperty PartialViewModelProperty =
            DependencyProperty.Register("PartialViewModel", typeof(UIElement), typeof(ViewModelBase), new UIPropertyMetadata(null));

    This dependency property is used in MainWindow.xaml to display the UserControl dynamically, like this:

        <UserControl Content="{Binding PartialViewModel}" />

    There are also two buttons on this Window that fire the commands ShowMessagesCommand and ShowUsersCommand, and when these are fired the UserControl changes because PartialViewModel is a dependency property. I want to know if this is bad practice. Should I not inject the UserControl like this? Is there another "better" alternative that corresponds better with the design pattern? Or is this a nice way of including partial views?

    Read the article

  • How to make Solution Explorer behave after clearing search?

    - by stijn
    I currently have a VS installation with no extensions, to see how that works out. For navigation that means making heavy use of Ctrl+; aka Search Solution Explorer. While the search itself is OK, it has one major drawback that makes it a pain for me to use (both with keyboard and mouse). Take a solution with two projects, one collapsed, one opened: use Ctrl+; and start typing until a match is found in the collapsed project. What I want now is to simply clear the search and return to the previous view. Seems like a pretty standard requirement, no? But there seems to be no such functionality built in. The problem with the current commands that come close (pressing Esc, clicking the Back or Home buttons in the Solution Explorer toolbar) is always the same: they have the extremely annoying behaviour of suddenly expanding the previously collapsed project and tracking the match that was found (and by the way, the Track Active Item in Solution Explorer option is turned off in the options). This makes no sense from a UX point of view. You select some kind of 'undo' command, the search box clears, which is expected, but then suddenly there's an item visible from a previous search. So if the collapsed project has, say, 50 items in it, Solution Explorer is now useless visually, since it litters the screen with stuff you don't want to see, and worse, you have to manually collapse the project again to return to the previous view. Is there a way around this? I thought maybe the keyboard shortcuts for Back/Home would behave differently, but those commands do not seem to be registered. I looked into EnvDTE80.DTE2.ToolWindows.SolutionExplorer, but it has no properties/methods that have anything to do with this issue. And somewhere in the tree there is a Microsoft.VisualStudio.PlatformUI.SolutionPivotNavigator, which is probably the class responsible for this behaviour, but I have no idea how to access it.

    Read the article

  • UnicodeEncodeError: 'ascii' codec can't encode character [...]

    - by user1461135
    I have read the HOWTO on Unicode from the official docs and a full, very detailed article as well. Still I don't get why it throws me this error. Here is what I attempt: I open an XML file that contains chars outside the ASCII range (but inside the allowed XML range). I do that with

        cfg = codecs.open(filename, encoding='utf-8', mode='r')

    which runs fine. Looking at the string with repr() also shows me a unicode string. Now I go ahead and read that with

        parseString(cfg.read().encode('utf-8'))

    Of course, my XML file starts with <?xml version="1.0" encoding="utf-8"?>. Although I suppose it is not relevant, I also declared utf-8 as the coding of my Python script, but since I am not writing unicode characters directly in it, this should not apply here. Same for the following line, which is also right at the beginning:

        from __future__ import unicode_literals

    Next I pass the generated object to my own class, where I read tags into variables like this and assign them to attributes of my class:

        xmldata.getElementsByTagName(tagName)[0].firstChild.data

    Now, what works perfectly are these commands (obj is an instance of the class):

        for element in obj:
            print element

    And this command works as well:

        print obj.__repr__()

    I defined __iter__() to just yield every variable, while __repr__() uses the typical printf stuff: "%s" % self.varname. Both commands print perfectly and can output the unicode characters. What does not work is this:

        print obj

    And now I am stuck, because this throws the dreaded UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 47. So what am I missing? What am I doing wrong? I am looking for a general solution; I always want to handle strings as unicode, just to avoid any possible errors and write a compatible program. Edit: I also defined this:

        def __str__(self):
            return self.__repr__()

        def __unicode__(self):
            return self.__repr__()

    From documentation I got that this
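
    For reference, a minimal sketch of the usual Python 2 convention (my own example class, not the poster's code): keep text as unicode inside the object and encode to bytes only at the output boundary, i.e. in __str__/__repr__. Returning a unicode object with non-ASCII characters from __str__ is exactly what makes print fall back to the ascii codec and raise.

        # -*- coding: utf-8 -*-
        class Element(object):
            def __init__(self, name):
                self.name = name                       # unicode internally, e.g. u'M\xfcnchen'

            def __unicode__(self):
                return u"<Element %s>" % self.name

            def __repr__(self):
                return unicode(self).encode('utf-8')   # repr() must return bytes

            def __str__(self):
                # str() must also return bytes in Python 2; print obj calls this,
                # so encoding here avoids the implicit ascii conversion.
                return unicode(self).encode('utf-8')

        e = Element(u'M\xfcnchen')
        print e          # encoded bytes, prints fine on a UTF-8 terminal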

    Read the article

  • Need help regarding lwuit

    - by rajiv
    Hi, I have a project already developed using Canvas, and the lib used is LCDUI. It's for Nokia devices with keyboards. Now I want to make the same application work on touch devices. I have used touch methods like pointerPressed, etc. For normal functionality that worked pretty well, but it creates a problem with commands. My application is in fullscreen mode, and I have created the commands using a user-defined menu list. The problem is that I cannot directly identify which command has been clicked, and setting coordinates for every command is not a feasible solution for me. I came across the new lib LWUIT, but I found out that it supports only Forms (can't we use it on a Canvas?), and integrating LCDUI and LWUIT also doesn't seem possible (please give a suggestion: can we use both in the same application?). Is it possible to create a Form under a Canvas itself? Is any other lib support available? Thank you.

    Read the article

  • MS Office Excel Ribbon - Cannot change/hide Editing group in Home tab

    - by A9S6
    I have a .NET add-in for Excel. The add-in creates the Ribbon UI for Excel 2007 and re-purposes some existing commands such as Cut, Copy, Paste, Sort etc. For Cut, Copy and Paste I am just overriding their OnAction value to call my own procedure when the buttons are clicked. But for the Sort, Sort Asc and Sort Desc commands the case is a little different: when any of those buttons is clicked, I want to get notified and then call the default functionality. This was possible with Excel 2003 command bars by calling the Execute() method on the CommandBarControl. In Excel 2007 there is an ExecuteMso() method to programmatically click a ribbon element, but when OnAction is overridden, ExecuteMso() just executes my own procedure and not the default functionality of that button. So I thought I would HIDE the Sort buttons in the "Editing" group on the Home tab and add my own Sort, Sort Asc and Sort Desc buttons to it. The buttons would call into my procedure first, from where I would call the default behaviour. Now the problem is that I am unable to change/hide the Editing group (idMso="GroupEditing"). Is this built-in group not editable? I can, however, HIDE the Clipboard and other groups (but can't add buttons to them).

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
          <ribbon>
            <tabs>
              <tab idMso="TabHome">
                <group idMso="GroupEditing" visible="false" />
              </tab>
            </tabs>
          </ribbon>
        </customUI>

    Read the article

  • Matlab set defaultTextInterpreter to LaTeX

    - by Maurits
    I am running Matlab R2010A on OS X 10.7.5 I have a simple matlab plot and would like to use LaTeX commands in the axis and legend. However setting: set(0, 'defaultTextInterpreter', 'latex'); Has zero effect, and results in a TeX warning that my tex commands can not be parsed. If I open plot tools of this plot, the default interpreter is set to 'TeX'. Manually setting this to 'LaTeX' obviously fixes this, but I can't do this for hundreds of plots. Now, if I retrieve the default interpreter via the Matlab prompt, i.e get(0,'DefaultTextInterpreter') It says 'LaTeX', but again, when I look in the properties of the figure via the plot tools menu, the interpreter remains set to 'TeX'. Complete plotting code: figure f = 'somefile.eps' set(0, 'defaultTextInterpreter', 'latex'); ms = 8; fontSize = 18; loglog(p_m_sip, p_fa_sip, 'ko-.', 'LineWidth', 2, 'MarkerSize', ms); hold on; xlabel('$P_{fa}$', 'fontsize', fontSize); ylabel('$P_{m}$', 'fontsize', fontSize); legend('$\textbf{K}_{zz}$', 'Location', 'Best'); set(gca, 'XMinorTick', 'on', 'YMinorTick', 'on', 'YGrid', 'on', 'XGrid', 'on'); print('-depsc2', f);

    Read the article

  • Problem uploading app to google app engine

    - by Oberon
    I'm having problems uploading an app to Google App Engine from my workplace. I believe the problem is related to the proxy, because I do not see the same problem when following the same procedure from home (I do not specify HTTP_PROXY from home). These are the commands I run:

        HTTP_PROXY=http://proxy.<thehostname>.com:8080
        HTTP_PROXY=https://proxy.<thehostname>.com:8080
        appcfg.py --insecure update myappfolder

    When running the commands I get prompted for email and password, as expected, but after that it immediately exits with this error message:

        Error 302: --- begin server output ---
        <HTML>
        <HEAD>
        <TITLE>Moved Temporarily</TITLE>
        </HEAD>
        <BODY BGCOLOR="#FFFFFF" TEXT="#000000">
        <H1>Moved Temporarily</H1>
        The document has moved
        <A HREF="https://www.google.com/accounts/ClientLogin">here</A>.
        </BODY>
        </HTML>
        --- end server output ---

    Note: I added the --insecure option because it otherwise gave a warning about a missing ssl module. Any idea how to solve or work around this problem?
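
    One thing worth checking (a guess on my part, not a confirmed fix): the second line above reassigns HTTP_PROXY instead of setting HTTPS_PROXY, and without export neither variable is passed on to appcfg.py at all, even though the ClientLogin request goes over HTTPS. A minimal sketch of launching the upload with both variables set in the child environment (equivalently, export both in the shell first); the proxy host is the placeholder from the question:

        import os
        import subprocess

        env = dict(os.environ)
        proxy = 'http://proxy.<thehostname>.com:8080'
        env['HTTP_PROXY'] = proxy
        env['HTTPS_PROXY'] = proxy     # ClientLogin is reached over HTTPS
        subprocess.call(['appcfg.py', '--insecure', 'update', 'myappfolder'], env=env)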

    Read the article

  • multiple clients - one server connection with sockets tcp/ip c# .net

    - by jagse
    Hello guys, I need to develop a client server system where I can have multiple clients communicating with one server at the same time. I want to communicate xml serialized objects and also need to send and receive other commands to invoke methods. Now, I am just starting with socket programming in C# and .Net and found that the asynchronous I/O is the way to go so that the methods dont block the execution of code. Also there are many examples of how to make a simple client server system. So I have a basic understanding of how that works. Anyway, what still is not clear to me is how I can set up a server which can manage connections to multiple clients? Can I just create a new socket per connection and then store those in some kind of list? Do I need some kind of multiplexing to achieve this? Do I have to listen at multiple ports? What`s the best way here? And the other thing is if I need to develop my own protocol to differentiate between what I am actually sending over the network -- xml serialized object or a command which might be just a string encoded in ascII or something. Or would I develop my own protocol just to send these commands? Any kind of help is apreciated! If someone knows a good book which covers this sort of stuff, let me know. Cheers I forgot to mention that some of my clients which are supposed to communicate with my server will be pda and I therefore use the compact framework... So this might bring in some restrictions...

    Read the article

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) as provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a 3rd party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else. In the beginning I thought I could fit the information I wanted to communicate in the filenames, and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible with the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (client uploads a file and updates the database with a command as a row in a table) but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients will have to use. I looked into Pyro to let applications communicate more directly but I don't like the idea of running an extra nameserver for this one purpose. I also don't see a good way to do file transfer within this framework. What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.

    Read the article
