Search Results

Search found 11111 results on 445 pages for 'fahnz mode'.


  • Kernel APIs, or using APIs in the kernel

    - by user513647
    Hello everybody, I'd like to know if and how I can access API calls inside the kernel. I need them to perform several integrity checks on a program of mine that runs in user mode, but I don't know how to access the APIs and functions required to do so. Does anybody know how to obtain the process ID of my user-mode process, and how to access all of its memory to perform the check? Thanks in advance. PS: I'm on a Windows XP machine.
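
    One possible shape for this, sketched below under the assumption that the check runs inside a kernel-mode driver (WDK, C): look the process up by PID, attach to its address space, and read the regions to be verified. The function name is illustrative, and error handling plus safe probing of user memory are omitted.

        #include <ntifs.h>

        // Hedged sketch: look up the user-mode process by PID and attach to its
        // address space so its memory can be read for the integrity check.
        NTSTATUS CheckUserProcess(HANDLE ProcessId)
        {
            PEPROCESS process;
            NTSTATUS status = PsLookupProcessByProcessId(ProcessId, &process);
            if (!NT_SUCCESS(status))
                return status;

            KAPC_STATE apcState;
            KeStackAttachProcess((PRKPROCESS)process, &apcState);  // switch to the target address space
            // ... read and hash the regions to be integrity-checked here ...
            KeUnstackDetachProcess(&apcState);

            ObDereferenceObject(process);                          // release the reference taken above
            return STATUS_SUCCESS;
        }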

    Read the article

  • What's the difference between the ruby irb prompt modes?

    - by Steven
    I can change the irb prompt mode with irb --prompt prompt-mode. I can see what null and simple do, but I can't tell the difference between null and xmp, or between default, classic and inf-ruby. Can someone explain what these other modes do? It seems pointless to have multiple modes doing the same thing.

    Read the article

  • Using mcrypt to pass data across a webservice is failing

    - by adam
    Hi, I'm writing an error handler script which encrypts the error data (file, line, error, message, etc.) and passes the serialized array as a POST variable (using curl) to a script which then logs the error in a central db. I've tested my encrypt/decrypt functions in a single file and the data is encrypted and decrypted fine:

        define('KEY', 'abc');
        define('CYPHER', 'blowfish');
        define('MODE', 'cfb');

        function encrypt($data) {
            $td = mcrypt_module_open(CYPHER, '', MODE, '');
            $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND);
            mcrypt_generic_init($td, KEY, $iv);
            $crypttext = mcrypt_generic($td, $data);
            mcrypt_generic_deinit($td);
            return $iv.$crypttext;
        }

        function decrypt($data) {
            $td = mcrypt_module_open(CYPHER, '', MODE, '');
            $ivsize = mcrypt_enc_get_iv_size($td);
            $iv = substr($data, 0, $ivsize);
            $data = substr($data, $ivsize);
            if ($iv) {
                mcrypt_generic_init($td, KEY, $iv);
                $data = mdecrypt_generic($td, $data);
            }
            return $data;
        }

        echo "<pre>";
        $data = md5('');
        echo "Data: $data\n";
        $e = encrypt($data);
        echo "Encrypted: $e\n";
        $d = decrypt($e);
        echo "Decrypted: $d\n";

    Output:

        Data: d41d8cd98f00b204e9800998ecf8427e
        Encrypted: ê÷#¯KžViiÖŠŒÆÜ,ÑFÕUW£´Œt?†÷>c×åóéè+„N
        Decrypted: d41d8cd98f00b204e9800998ecf8427e

    The problem is, when I put the encrypt function in my transmit file (tx.php) and the decrypt in my receive file (rx.php), the data is not fully decrypted (both files have the same set of constants for key, cipher and mode).

    Data before passing: a:4:{s:3:"err";i:1024;s:3:"msg";s:4:"Oops";s:4:"file";s:46:"/Applications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;}
    Data decrypted: Mª4:{s:3:"err";i:1024@7OYªç`^;g";s:4:"Oops";s:4:"file";sôÔ8F•Ópplications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;}

    Note the random characters in the middle. My curl is fairly simple:

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . $data);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $output = curl_exec($ch);

    Things I suspect could be causing this: the encoding of the curl request; something to do with mcrypt padding missing bytes; or I've been staring at it too long and have missed something really obvious. If I turn off the crypt functions (so the tx-rx transfer is unencrypted) the data is received fine. Any and all help much appreciated! Thanks, Adam
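
    A hedged guess at the cause, with a sketch of the usual fix: raw mcrypt output is binary, so bytes in it get mangled by URL-encoding and string handling on the way through the POST body. Base64-encoding the ciphertext before sending (and decoding it on the receiving side) keeps the payload 7-bit safe. The encrypt()/decrypt() functions and $url are taken from the question; $error and $payload are illustrative names.

        // tx.php -- transmit side (sketch): make the binary ciphertext POST-safe.
        $payload = base64_encode(encrypt(serialize($error)));

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . urlencode($payload));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $output = curl_exec($ch);

        // rx.php -- receive side (sketch): reverse the encoding before decrypting.
        $data = decrypt(base64_decode($_POST['data']));
        $error = unserialize($data);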

    Read the article

  • How do I disable system pop-ups in Windows CE 6?

    - by Ben Schoepke
    What do I have to do to disable all system pop-ups in WinCE 6 R2? I read Mike Hall's post about kiosk mode [1], but that's not going to work for us because we still want the standard graphical Explorer shell. We plan on hiding the taskbar and start menu and clearing the icons off the desktop, but we need an easy way to make sure that no pop-ups of any type will ever show up on top of our app. Thanks, Ben

    [1] http://blogs.msdn.com/mikehall/archive/2007/06/01/kiosk-mode-for-ce-6-0.aspx

    Read the article

  • Dependency Injection: I don't get where to start!

    - by Andy
    I have read several articles about Dependency Injection, and I can see the benefits, especially when it comes to unit testing. The units can be loosely coupled, and dependencies can be mocked. The trouble is - I just don't get where to start. Consider the snippet below of (much edited for the purpose of this post) code that I have. I am instantiating a Plc object from the main form, and passing in a communications mode via the Connect method. In its present form it becomes hard to test, because I can't isolate the Plc from the CommsChannel to unit test it. (Can I?) The class depends on using a CommsChannel object, but I am only passing in a mode that is used to create this channel within the Plc itself. To use dependency injection, I should really pass in an already created CommsChannel (via an 'ICommsChannel' interface perhaps) to the Connect method, or maybe via the Plc constructor. Is that right? But then that would mean creating the CommsChannel in my main form first, and this doesn't seem right either, because it feels like everything will come back to the base layer of the main form, where everything begins. Somehow it feels like I am missing a crucial piece of the puzzle. Where do you start? You have to create an instance of something somewhere, but I'm struggling to understand where that should be.

        public class Plc
        {
            public bool Connect(CommsMode commsMode)
            {
                bool success = false;
                // Create new comms channel.
                this._commsChannel = this.GetCommsChannel(commsMode);
                // Attempt connection
                success = this._commsChannel.Connect();
                return this._connected;
            }

            private CommsChannel GetCommsChannel(CommsMode mode)
            {
                CommsChannel channel;
                switch (mode)
                {
                    case CommsMode.RS232:
                        channel = new SerialCommsChannel(
                            SerialCommsSettings.Default.ComPort,
                            SerialCommsSettings.Default.BaudRate,
                            SerialCommsSettings.Default.DataBits,
                            SerialCommsSettings.Default.Parity,
                            SerialCommsSettings.Default.StopBits);
                        break;
                    case CommsMode.Tcp:
                        channel = new TcpCommsChannel(
                            TCPCommsSettings.Default.IP_Address,
                            TCPCommsSettings.Default.Port);
                        break;
                    default:
                        // Throw unknown comms channel exception.
                }
                return channel;
            }
        }
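
    For what it's worth, a minimal sketch of the constructor-injection shape described above (the ICommsChannel interface, CommsChannelFactory and FakeCommsChannel are assumptions for illustration, not part of the original code): Plc receives an already-built channel and never knows which concrete type it is. The main form (or whatever acts as the composition root) is the one place that decides which channel to build, so "everything coming back to the main form" is expected and is limited to that single decision.

        public interface ICommsChannel
        {
            bool Connect();
        }

        public class Plc
        {
            private readonly ICommsChannel _commsChannel;

            // The dependency is supplied from outside; tests can pass in a fake channel.
            public Plc(ICommsChannel commsChannel)
            {
                _commsChannel = commsChannel;
            }

            public bool Connect()
            {
                return _commsChannel.Connect();
            }
        }

        // Composition root (e.g. the main form), the one place that chooses a concrete channel:
        //     ICommsChannel channel = CommsChannelFactory.Create(CommsMode.RS232);
        //     var plc = new Plc(channel);
        // In a unit test:
        //     var plc = new Plc(new FakeCommsChannel());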

    Read the article

  • Has anyone gotten VB highlighting to work in Emacs23?

    - by akoumjian
    I have found and installed the Visual Basic mode for Emacs. It seems to be loading on Emacs startup, and the VB mode loads when I open a *.bas file. The code is not highlighted at all, however. I'm using Emacs 23, and I tried it with 21 and saw no difference. Background for the curious: I am rewriting a set of code from VB to Python. The syntax highlighting will make it much easier for me to see what's going on.
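
    A hedged guess, since the mode itself loads fine: the most common reason a correctly loaded major mode shows no colours is that font-lock isn't turned on for that buffer. Something like the following in the .emacs is a cheap way to test that (visual-basic-mode-hook is assumed to be the mode's hook name, following the usual Emacs convention):

        ;; Force fontification globally, and explicitly for the VB mode's buffers.
        (global-font-lock-mode 1)
        (add-hook 'visual-basic-mode-hook 'turn-on-font-lock)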

    Read the article

  • How to set a keybinding which is valid in all modes in Emacs

    - by AnotherEmacsLearner
    Hi, I've configured my Emacs to use M-j as backward-char with (global-set-key (kbd "M-j") 'backward-char) ; was indent-new-comment-line in my .emacs file. This works fine in many modes (text/org/lisp), but in c++-mode and php-mode it is still bound to the default c-indent-new-comment-line. How can I bind M-j to backward-char in these modes too, and in general for ALL modes? Thanks, AnotherEmacsLearner
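
    A sketch of one common fix: modes derived from CC Mode (c++-mode, and php-mode if it is built on it) bind M-j in their own keymaps, and a mode's local map shadows the global one, so global-set-key alone is not enough there. Rebinding the key in c-mode-common-hook restores the global behaviour for those modes:

        (global-set-key (kbd "M-j") 'backward-char)     ; global default
        ;; CC Mode derived buffers rebind M-j locally, so override it there as well.
        (add-hook 'c-mode-common-hook
                  (lambda () (local-set-key (kbd "M-j") 'backward-char)))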

    Read the article

  • Can't see how to compile a header file with the /clr switch in a mixed-mode class

    - by MattN
    When creating a mixed-mode class, on compilation the header file complains that it needs to be compiled with the /clr switch as it is a mixed-mode class. However, I can't see any option in Visual Studio to specifically compile that header with /clr, and I don't want to set the /clr flag for the entire project. Does anyone know how I can specify that this header file is compiled correctly with /clr? Thanks in advance!

    Read the article

  • Python: UTF-16LE file to UTF-8 encoding

    - by Qiao
    I have a big file with UTF-16LE (BOM) encoding. Is it possible to convert it to plain UTF-8 with Python? Something like:

        file_old = open('old.txt', mode='r', encoding='utf_16_le')
        file_new = open('new.txt', mode='w', encoding='utf-8')
        text = file_old.read()
        file_new.write(text.encode('utf-8'))

    http://docs.python.org/release/2.3/lib/node126.html (-- utf_16_le UTF-16LE)

    This is not working; I can't understand the "TypeError: must be str, not bytes" error. I'm on Python 3.
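
    A sketch of the likely fix: in Python 3, a file opened in text mode with an explicit encoding expects str, and the open/write calls do all the decoding and encoding themselves. Calling text.encode('utf-8') produces bytes, which is exactly what the TypeError complains about, so the string should be written directly:

        # Minimal sketch: let the file objects handle the transcoding.
        with open('old.txt', mode='r', encoding='utf_16_le') as file_old, \
             open('new.txt', mode='w', encoding='utf-8') as file_new:
            file_new.write(file_old.read())   # str in, str out; no manual .encode()

    Since the file starts with a BOM, opening it with encoding='utf-16' (rather than utf_16_le) lets Python consume the BOM instead of copying it into the output as a character.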

    Read the article

  • Issue with binding a collection-type dependency property in a style

    - by user344101
    Hi, I have a custom control exposing a dependency property of type ObservableCollection. When I bind this property directly as part of the control's markup in the containing control, everything works fine:

        <temp:EnhancedTextBox CollectionProperty="{Binding Path=MyCollection, Mode=TwoWay}"/>

    But when I try to do the binding in the style created for the control, it fails:

        <Style x:Key="abc2" TargetType="{x:Type temp:EnhancedTextBox}">
            <Setter Property="CollectionProperty" Value="{Binding Path=MyCollection, Mode=TwoWay}"/>
        </Style>

    Please help! Thanks

    Read the article

  • Emacs, C++ code completion for vectors

    - by Caglar Toklu
    Hi, I am new to Emacs, and I have the following code as a sample. I have installed GNU Emacs 23.1.1 (i386-mingw-nt6.1.7600), installed cedet-1.0pre7.tar.gz, installed ELPA, and company. You can find my simple Emacs configuration at the bottom. The problem is, when I type q[0] in main() and press . (dot), I see the 37 members of the vector, not of Person, although first_name and last_name are expected. The completion works as expected in the function greet(), but that has nothing to do with vector. My question is: how can I accomplish code completion for vector elements too?

        #include <iostream>
        #include <vector>

        using namespace std;

        class Person
        {
        public:
            string first_name;
            string last_name;
        };

        void greet(Person a_person)
        {
            // a_person.first_name is completed as expected!
            cout << a_person.first_name << "|";
            cout << a_person.last_name << endl;
        };

        int main()
        {
            vector<Person> q(2);
            Person guy1;
            guy1.first_name = "foo";
            guy1.last_name = "bar";
            Person guy2;
            guy2.first_name = "stack";
            guy2.last_name = "overflow";
            q[0] = guy1;
            q[1] = guy2;
            greet(guy1);
            greet(guy2);
            // cout q[0]. I want to see first_name or last_name here!
        }

    My Emacs configuration:

        ;;; This was installed by package-install.el.
        ;;; This provides support for the package system and
        ;;; interfacing with ELPA, the package archive.
        ;;; Move this code earlier if you want to reference
        ;;; packages in your .emacs.
        (when (load (expand-file-name "~/.emacs.d/elpa/package.el"))
          (package-initialize))

        (load-file "~/.emacs.d/cedet/common/cedet.el")
        (semantic-load-enable-excessive-code-helpers)
        (require 'semantic-ia)
        (global-srecode-minor-mode 1)

        (semantic-add-system-include "/gcc/include/c++/4.4.2" 'c++-mode)
        (semantic-add-system-include "/gcc/i386-pc-mingw32/include" 'c++-mode)
        (semantic-add-system-include "/gcc/include" 'c++-mode)

        (defun my-semantic-hook ()
          (imenu-add-to-menubar "TAGS"))
        (add-hook 'semantic-init-hooks 'my-semantic-hook)

    Read the article

  • Building 64-bit Qt on a 32-bit XP computer

    - by photo_tom
    I'm trying to build Qt as a shared 64-bit build on my 32-bit XP system. I can configure qmake and start the 64-bit build. The problem is that when the build starts, the first thing that happens is that the process builds the uic, moc and rcc utility compilers in 64-bit mode, then tries to run them on my 32-bit machine. Does anyone know how to configure the build so that it does not build those compilers first?

    Read the article

  • Problem with presentModalViewController

    - by pbcoder
    If my iPad is in landscape mode and presentModalViewController is called, the view automatically turns into portrait mode. Any solutions?

        UIViewController *start = [[UIViewController alloc] initWithNibName:@"SecondView" bundle:nil];
        start.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
        start.modalPresentationStyle = UIModalPresentationFormSheet;
        [self presentModalViewController:start animated:YES];

    In SecondView I've already added:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }
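
    A hedged observation rather than a confirmed fix: the presented controller is not the only one consulted. The presenting view controller (and, in practice, the chain up to the window's root) also has to report support for landscape, otherwise UIKit can snap the presentation to portrait. A quick check is to add the same override to the controller that calls presentModalViewController:

        // Sketch: the presenting controller should also allow the current orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }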

    Read the article

  • What's the best way to draw a fullscreen quad in OpenGL 3.2?

    - by Phineas
    I'm doing ray casting in the fragment shader. I can think of a couple of ways to draw a fullscreen quad for this purpose: either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, which is deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw the point.
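
    One approach that avoids immediate mode entirely, sketched below: generate a single oversized triangle in the vertex shader from gl_VertexID, so no vertex buffer is needed at all. With an (empty) VAO bound, it is drawn with glDrawArrays(GL_TRIANGLES, 0, 3), and after clipping the triangle covers the whole screen.

        // Vertex shader (GLSL 1.50, OpenGL 3.2 core) -- fullscreen triangle from gl_VertexID.
        #version 150
        out vec2 texcoord;
        void main()
        {
            // Vertices (0,0), (2,0), (0,2) in texcoord space -> clip positions (-1,-1), (3,-1), (-1,3).
            vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
            texcoord = pos;
            gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
        }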

    Read the article

  • Why might my Emacs use spaces instead of tabs?

    - by Fletcher Moore
    I am trying to diagnose this problem. TAB inserts 4 spaces instead of the 4-column tab character I want, but I don't think it should, because C-h v indent-tabs-mode in the buffer in question says it is set to t. When I check my keybindings, TAB is bound to c-indent-line-or-region. Does this function ignore indent-tabs-mode?
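
    A hedged sketch of one thing worth checking: c-indent-line-or-region does honour indent-tabs-mode, but CC Mode's style setup when the buffer is created can override values set elsewhere, and a tab character is only emitted once the indentation reaches tab-width. Forcing the relevant variables in c-mode-common-hook makes sure the values the command actually sees are the intended ones:

        ;; Make C-derived buffers indent with real tabs, 4 columns wide.
        (add-hook 'c-mode-common-hook
                  (lambda ()
                    (setq indent-tabs-mode t
                          tab-width 4
                          c-basic-offset 4)))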

    Read the article

  • TinyMCE - set full-screen mode when the editor is loaded?

    - by Martin
    I use tiny_mce as a word-like editor in textareas that are in iframes, and I want to use the whole iframe space. TinyMCE has a fullscreen button, but I need to set full-screen mode automatically when the plugin has loaded. Is there a function/trigger to call this mode (or the button)? Thanks for the help.
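
    A hedged sketch for TinyMCE 3.x, assuming the fullscreen plugin is loaded (the exact command name and init callback differ between TinyMCE versions, so treat this as a starting point): the same command the toolbar button runs can be executed from the oninit callback.

        tinyMCE.init({
            mode: "textareas",
            plugins: "fullscreen",
            oninit: function () {
                // Run the same command the fullscreen toolbar button triggers.
                tinyMCE.activeEditor.execCommand('mceFullScreen');
            }
        });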

    Read the article

  • How to highlight an image based on selection in JSF

    - by Paul
    The images are displayed horizontally using . At first no image is selected; once one is selected it has to be highlighted. If the user changes the selection, the old one has to go back to normal and the new one has to be highlighted. While in edit mode we need to highlight the already selected image, and the selection may change in this mode, so the highlight should change too. The bean is in session scope. Please provide some solution.

    Read the article

  • Buttons inside a ScrollViewer problem

    - by Miroslav Valchev
    Hello, everyone. I couldn't find a solution to my problem even though I believe that others have come across this too. Basically, there are around twenty buttons in a wrap panel, which is inside a ScrollViewer. The problem is that when I want to scroll the list, the click event fires the triggers. I would really appreciate help on this one.

        <ScrollViewer>
            <ScrollViewer.Content>
                <toolkit:WrapPanel Orientation="Horizontal" HorizontalAlignment="Left" VerticalAlignment="Top" Width="420">
                    <Button Style="{StaticResource imageButtonStyle}">
                        <i:Interaction.Triggers>
                            <i:EventTrigger EventName="Click">
                                <cmd2:EventToCommand Command="{Binding SelectCommand, Mode=OneWay}" CommandParameterValue="1" />
                            </i:EventTrigger>
                        </i:Interaction.Triggers>
                    </Button>
                    <Button Style="{StaticResource imageButtonStyle}">
                        <i:Interaction.Triggers>
                            <i:EventTrigger EventName="Click">
                                <cmd2:EventToCommand Command="{Binding SelectCommand, Mode=OneWay}" CommandParameterValue="2" />
                            </i:EventTrigger>
                        </i:Interaction.Triggers>
                    </Button>
                    <Button Style="{StaticResource imageButtonStyle}">
                        <i:Interaction.Triggers>
                            <i:EventTrigger EventName="MouseEnter">
                                <cmd2:EventToCommand Command="{Binding SelectCommand, Mode=OneWay}" CommandParameterValue="3" />
                            </i:EventTrigger>
                        </i:Interaction.Triggers>
                    </Button>
                    <Button Style="{StaticResource imageButtonStyle}">
                        <i:Interaction.Triggers>
                            <i:EventTrigger EventName="MouseEnter">
                                <cmd2:EventToCommand Command="{Binding SelectCommand, Mode=OneWay}" CommandParameterValue="4" />
                            </i:EventTrigger>
                        </i:Interaction.Triggers>
                    </Button>
                </toolkit:WrapPanel>
            </ScrollViewer.Content>
        </ScrollViewer>

    Read the article

  • WCF app using the Peer Chat sample as an example does not work

    - by splate
    I converted a VB .NET 3.5 app to use peer-to-peer WCF using the available Microsoft example of the Chat app. I made sure that I copied the app.config file from the sample (modified the names for my app) and added the appropriate references. I followed all the tutorials and added the appropriate tags and structure to my app code. Everything runs without any errors, but the clients only get messages from themselves and not from the other clients. The sample chat application does run just fine with multiple clients. The only difference I could find is that the server in the sample targets framework 2.0, but I assume that is wrong and it is building against at least 3.0, or the System.ServiceModel reference would break. Is there something that has to be registered that the sample is doing behind the scenes, or is the sample a special project type? I am confused. My next step is to copy all my classes and logic from my app into the sample app, but that is likely a lot of work.

    Here is my client App.config:

        <client>
            <endpoint name="thldmEndPoint"
                      address="net.p2p://thldmMesh/thldmServer"
                      binding="netPeerTcpBinding"
                      bindingConfiguration="PeerTcpConfig"
                      contract="THLDM_Client.IGameService"></endpoint>
        </client>
        <bindings>
            <netPeerTcpBinding>
                <binding name="PeerTcpConfig" port="0">
                    <security mode="None"></security>
                    <resolver mode="Custom">
                        <custom address="net.tcp://localhost/thldmServer"
                                binding="netTcpBinding"
                                bindingConfiguration="TcpConfig"></custom>
                    </resolver>
                </binding>
            </netPeerTcpBinding>
            <netTcpBinding>
                <binding name="TcpConfig">
                    <security mode="None"></security>
                </binding>
            </netTcpBinding>
        </bindings>

    Here is my server App.config:

        <services>
            <service name="System.ServiceModel.PeerResolvers.CustomPeerResolverService">
                <host>
                    <baseAddresses>
                        <add baseAddress="net.tcp://localhost/thldmServer"/>
                    </baseAddresses>
                </host>
                <endpoint address="net.tcp://localhost/thldmServer"
                          binding="netTcpBinding"
                          bindingConfiguration="TcpConfig"
                          contract="System.ServiceModel.PeerResolvers.IPeerResolverContract">
                </endpoint>
            </service>
        </services>
        <bindings>
            <netTcpBinding>
                <binding name="TcpConfig">
                    <security mode="None"></security>
                </binding>
            </netTcpBinding>
        </bindings>

    Thanks ahead of time.

    Read the article
