Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.


  • .NET: Problems creating email attachment from MemoryStream.

    - by David
    Hi all, I'm using the following method to create an attachment from a MemoryStream:

        public void AddAttachment(Stream stream, string filename, string mimeType)
        {
            byte[] buffer = ((MemoryStream) stream).GetBuffer();
            Attachment attachment = new Attachment(stream, filename, mimeType);
            _mail.Attachments.Add(attachment);
        }

    Note that the first line isn't necessary for the attachment functionality; it's just useful to have the byte[] handy during debugging so that I can see how big it is. (It generally has around 80,000 elements.) The code runs fine and the email is sent. When Outlook receives the email, it displays the attachment symbol in the Inbox, but when you go into the email the attachment isn't there. Unfortunately I don't have access to the mail server to find out more about the email, e.g. what the attachment looks like, its size, etc. Can anyone suggest what properties of the MemoryStream argument might tell me if it is in some way invalid for attachment? Or think of anything else I might try? Thank you. David
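
    A likely culprit (a guess from the symptoms, not confirmed by the post): Attachment reads from the stream's current position, and a MemoryStream that has just been written to is positioned at its end, so the attachment arrives empty. A minimal sketch of the fix, assuming the stream is seekable:

        public void AddAttachment(Stream stream, string filename, string mimeType)
        {
            stream.Position = 0; // rewind: Attachment reads from the current position onwards
            Attachment attachment = new Attachment(stream, filename, mimeType);
            _mail.Attachments.Add(attachment);
        }

    Note also that GetBuffer() returns the MemoryStream's whole internal buffer, unused capacity included; ToArray() returns only the bytes actually written, which makes the debug size check more meaningful.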


  • Do overlays work correctly in Emacs for Windows?

    - by Cheeso
    I'm using Flymake on C# code, emacs v22.2.1 on Windows. The Flymake stuff has been working well for me. For those who don't know, you can read an overview of flymake, but the quick story is that flymake repeatedly builds the source file you are currently working on in the background, for the purpose of doing syntax checking. It then highlights the compiler warnings and errors in the current buffer. Flymake didn't work for C# initially, but I "monkey-patched it" and it works nicely now. If you edit C# in emacs, I highly recommend using flymake. The only problem I have is with the UI. Flymake highlights the errors and warnings nicely, and then inserts "overlays" with the full error or warning text. If I hover the mouse pointer over the highlighted line in code, the overlay pops up. But as you can see, the overlay is truncated, and it doesn't display correctly. Flymake seems to be doing the right thing; it's the overlay part that seems broken. Do overlays work correctly in emacs for Windows? Where do I look to fix this?
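
    A common workaround from the Emacs community (untested here, and assuming flymake has attached the message text to its overlay's help-echo property, which is its default behaviour) is to bypass the tooltip entirely and echo the error for the line under point in the minibuffer:

        (defun my-flymake-show-help ()
          (when (get-char-property (point) 'flymake-overlay)
            (let ((help (get-char-property (point) 'help-echo)))
              (if help (message "%s" help)))))

        (add-hook 'post-command-hook 'my-flymake-show-help)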


  • chrome extension login security with iframe

    - by Weaver
    I should note, I'm not a chrome extension expert. However, I'm looking for some advice or a high level solution to a security concern I have with my chrome extension. I've searched quite a bit but can't seem to find a concrete answer.

    The situation: I have a chrome extension that needs to have the user log in to our backend server. However, it was decided for design reasons that the default chrome popup balloon was undesirable. Thus I've used a modal dialog and jQuery to make a styled popup that is injected with content scripts. Hence, the popup is injected into the DOM of the page you are visiting.

    The problem: everything works, however now that I need to implement login functionality I've noticed a vulnerability: if the site we've injected our popup into knows the password field's ID, it could run a script to continuously monitor the password and username fields and store that data. Call me paranoid, but I see it as a risk. In fact, I wrote a mockup attack site that can correctly pull the user and password when entered into the given fields.

    My devised solution: I took a look at some other chrome extensions, like Buffer, and noticed what they do is load their popup from their website and, instead, embed an iframe which contains the popup in it. The popup would interact with the server inside the iframe. My understanding is iframes are subject to the same-origin scripting policies as other websites, but I may be mistaken. As such, would doing the same thing be secure?

    TLDR: To simplify, if I embedded an https login form from our server into a given DOM, via a chrome extension, are there security concerns to password sniffing? If this is not the best way to deal with chrome extension logins, do you have suggestions for what is? Perhaps there is a way to declare text fields that javascript can simply not interact with? Not too sure! Thank you so much for your time! I will happily clarify anything required.
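
    For reference, the iframe approach boils down to a content script along these lines (a sketch; the login page URL is hypothetical). Because the frame is cross-origin relative to the host page, the host page's scripts cannot reach into its DOM to read the fields - that is the same-origin policy working in the extension's favour:

        // content script: inject a login popup served from our own HTTPS origin
        var frame = document.createElement('iframe');
        frame.src = 'https://example.com/extension/login'; // hypothetical login page
        frame.style.cssText = 'position:fixed; top:20px; right:20px;' +
                              'width:300px; height:220px; border:0; z-index:99999;';
        document.body.appendChild(frame);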


  • How do I transfer a file from client to server?

    - by Phsika
    I try to receive a file from the server, but server.Start() gives me this error: "An attempt was made to access a socket in a way forbidden by its access permissions." How can I solve it?

        private void btn_Recieve_Click(object sender, EventArgs e)
        {
            TcpListener server = null;
            // Set the TcpListener on port 13000.
            Int32 port = 13000;
            IPAddress localAddr = IPAddress.Parse("192.168.1.201");

            // TcpListener server = new TcpListener(port);
            server = new TcpListener(localAddr, port);

            // Start listening for client requests.
            server.Start();

            // Buffer for reading data
            Byte[] bytes = new Byte[277577];
            String data;
            data = null;

            // Perform a blocking call to accept requests.
            // You could also use server.AcceptSocket() here.
            TcpClient client = server.AcceptTcpClient();
            NetworkStream stream = client.GetStream();
            int i;
            i = stream.Read(bytes, 0, 277577);
            BinaryWriter writer = new BinaryWriter(File.Open("GoodLuckToMe.jpg", FileMode.Create));
            writer.Write(bytes);
            writer.Close();
            client.Close();
        }
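
    Two things worth checking beyond the socket error itself (which typically means the port is blocked by a firewall, reserved by another process, or 192.168.1.201 is not an address assigned to this machine): a single Read() is not guaranteed to return the whole file, and writer.Write(bytes) writes the entire 277577-byte buffer no matter how few bytes were actually read. A sketch of a receive loop that avoids both problems:

        byte[] buf = new byte[8192];
        using (FileStream fs = File.Open("GoodLuckToMe.jpg", FileMode.Create))
        {
            int read;
            // Read until the sender closes the connection (Read returns 0).
            while ((read = stream.Read(buf, 0, buf.Length)) > 0)
                fs.Write(buf, 0, read);
        }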


  • How do I repass a function pointer in C++

    - by fneep
    Firstly, I am very new to function pointers and their horrible syntax, so play nice. I am writing a method to filter all pixels in my bitmap based on a function that I pass in. I have written the method to dereference it and call it in the pixel buffer, but I also need a wrapper method in my bitmap class that takes the function pointer and passes it on. How do I do it? What is the syntax? I'm a little stumped. Here is my code with all the irrelevant bits stripped out and files combined (read: all variables initialized, filled, etc.).

        struct sColour
        {
            unsigned char r, g, b, a;
        };

        class cPixelBuffer
        {
        private:
            sColour* _pixels;
            int _width;
            int _height;
            int _buffersize;
        public:
            void FilterAll(sColour (*FilterFunc)(sColour));
        };

        void cPixelBuffer::FilterAll(sColour (*FilterFunc)(sColour))
        {
            // fast fast fast hacky FAST
            for (int i = 0; i < _buffersize; i++)
            {
                _pixels[i] = (*FilterFunc)(_pixels[i]);
            }
        }

        class cBitmap
        {
        private:
            cPixelBuffer* _pixels;
        public:
            inline void cBitmap::Filter(sColour (*FilterFunc)(sColour))
            {
                //HERE!!
            }
        };
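
    For what it's worth, the pass-through is just the parameter name again - the wrapper hands the pointer on unchanged. A sketch (with a typedef to tame the syntax); note that the in-class definition should not repeat the cBitmap:: qualifier, which standard C++ rejects as extra qualification:

        typedef sColour (*FilterFuncT)(sColour);

        class cBitmap
        {
        private:
            cPixelBuffer* _pixels;
        public:
            void Filter(FilterFuncT FilterFunc)
            {
                _pixels->FilterAll(FilterFunc); // forward the pointer as-is
            }
        };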


  • What is the most efficient way to display decoded video frames in Qt?

    - by Jason
    What is the fastest way to display images to a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I am currently getting the frame data into a QPixmap by using the QImage constructor from a memory buffer and converting it to QPixmap using QPixmap::fromImage(). I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView. I am doing any required video scaling or colorspace conversions with libswscale so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed. Thanks.
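
    For reference, the path described above looks roughly like this (a sketch assuming tightly packed RGB24 frames; names are illustrative). The QImage constructor wraps the buffer without copying, so the expensive step is the fromImage() conversion:

        // Wrap the decoded frame data; this constructor makes no copy.
        QImage image(frameData, width, height, bytesPerLine, QImage::Format_RGB888);

        // The QImage -> QPixmap conversion is where the copy happens.
        pixmapItem->setPixmap(QPixmap::fromImage(image));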


  • Are Parameters really enough to prevent Sql injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise them immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server? There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but now I am thinking of SQL injections. I'm especially interested in attacks against MsSQL 2005 and 2008, since they are my primary databases, but all databases are interesting. Edit: To clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the sql query in a string. So instead of doing this:

        SELECT * FROM Table WHERE Name = 'a name'

    We do this:

        SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query / command object.
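
    In .NET terms, the parameterized form looks like this (a sketch using System.Data.SqlClient; names are illustrative). The key property is that the value travels separately from the SQL text, so it is never parsed as SQL:

        using (SqlCommand cmd = new SqlCommand(
                   "SELECT * FROM Table WHERE Name = @Name", connection))
        {
            cmd.Parameters.AddWithValue("@Name", name); // value sent out-of-band
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // ...
            }
        }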


  • Python3 and ftplib uploading files

    - by Teifion
    My python2 script uploads files nicely using this method, but python3 is presenting problems and I'm stuck as to where to go next (googling hasn't helped).

        from ftplib import FTP
        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        ftp.storbinary('STOR myfile.txt', open('myfile.txt'))

    The error I get is:

        Traceback (most recent call last):
          File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
            ftp.storlines('STOR myfile.txt', open('myfile.txt'))
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 454, in storbinary
            conn.sendall(buf)
        TypeError: must be bytes or buffer, not str

    I tried altering the code to:

        from ftplib import FTP
        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))

    But instead I got this:

        Traceback (most recent call last):
          File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
            ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 450, in storbinary
            conn = self.transfercmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 358, in transfercmd
            return self.ntransfercmd(cmd, rest)[0]
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 329, in ntransfercmd
            resp = self.sendcmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 244, in sendcmd
            self.putcmd(cmd)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 179, in putcmd
            self.putline(line)
          File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 172, in putline
            line = line + CRLF
        TypeError: can't concat bytes to str

    Can anybody point me in the right direction?
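
    The usual fix (and what the first traceback is pointing at) is to open the file in binary mode so reads produce bytes, while leaving the command itself as a str. A sketch:

        from ftplib import FTP

        ftp = FTP(ftp_host, ftp_user, ftp_pass)
        # 'rb' makes read() return bytes, which sendall() requires;
        # the 'STOR ...' command string stays a normal str.
        with open('myfile.txt', 'rb') as f:
            ftp.storbinary('STOR myfile.txt', f)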


  • VS2010 express beta2 - no add reference dialog, no open file/project dialogs

    - by David
    Just installed VS2010 express for Windows Phone last night. Install went smoothly. It creates a project, compiles, and deploys the app to the emulator. Here's the problem: when I try to "Add Reference" through the Project menu, I do not get the Add Reference dialog box. Same thing if I right click References in the solution explorer and click Add Reference. That's not all. "File...Open" and "File...Open Project" also fail to throw up an open file dialog box. When attempting any of these actions, the IDE quickly loses and regains focus. Even pressing a keyboard shortcut (Ctrl+O) causes the IDE to quickly lose and regain focus, but no open file dialog box appears. This is what I have tried, not particularly in this order:

        1. Turned off UAC.
        2. Monitored file and registry access using Process Monitor during a File...Open operation. File activity showed mostly "SUCCESS" with a few "FAST IO DISALLOWED" and a few "INVALID DEVICE REQUEST" results. Registry activity showed mostly "SUCCESS" with some "NAME NOT FOUND" and a few "BUFFER OVERFLOW" results.
        3. Created a new, clean Windows account to run the IDE from.
        4. Forced a test project to add a reference to "System.Xml.Linq" by editing the ".csproj" project file. Project failed to load in the IDE.

    I don't have these problems at all on 2 other Windows 7 computers with VS2010 C# express beta 2 installed. One machine is 32bit and the other 64bit, both Home Premium edition. My system: Windows 7 Home Premium, 64bit. Other Visual Studio products installed: VS2008 C# express, VS2008 C++ express. One other thing to note: several months ago I installed the non-phone distribution of VS2010 C# express beta 2, and I had the same exact problems. Back then I chalked it up to being beta and went back to VS2008 C# express, where I do not have these issues.


  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed conn

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multisecond delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for a HTTP server, balancer or proxy server that supports the following:

        1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
        2. The proxy server stops buffering the request when a size limit has been reached (say, 4KB), or when the request has been received completely, headers and body.
        3. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
        4. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB).
        5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
        6. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Any of them that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)
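
    For reference, the nginx directives covering the request side (steps 1-3) and response side (steps 4-6) look like this (a sketch; sizes are illustrative and the upstream name is hypothetical). nginx reads the whole request body before it connects upstream, which is the behaviour asked for:

        location / {
            # Buffer the client request body before touching the backend.
            client_body_buffer_size 16k;

            # Buffer the upstream response so the backend is released quickly.
            proxy_buffering on;
            proxy_buffers   16 64k;

            proxy_pass http://backend;  # hypothetical upstream
        }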


  • Form not updating usercontrol

    - by user328259
    From the post "Growing user control not updating"... Using C#, .Net 2.0 in a Windows environment.

    UserControl1 - draws cells to a bitmap buffer dependent upon its NumberOfCells property. UserControl2 - a panel that contains UserControl1 and displays a vertical scroll bar when necessary; also has a NumberOfCells property which sets UserControl1's NumberOfCells. Form1 - contains NumericUpDown controls (just increments) which update UserControl2 - supposed to!

    When I increment the control on the form by, say, 20, UserControl1 adds the necessary cells and UserControl2 displays the vertical scroll bar accordingly, BUT the form does not 'redraw' to the updated/correct image! Meaning, after I increment by 20, cells are added, the vertical scroll bar is added... but the image shown is just everything else expanding. I reset the control to scroll to the very TOP and the scrolling works, but the image is still static... UNTIL I resize my form - more specifically, when I change it from maximized to windowed or vice versa!!! What can I do to 'reset/redraw' the correct image? Thank you in advance. Lawrence
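
    A first thing to try (a guess based on the symptom, since resizing forces a full repaint): explicitly invalidate the control tree after changing the cell count, so the cached bitmap buffer is redrawn instead of stretched. A sketch with hypothetical control and property names:

        private void numericUpDown1_ValueChanged(object sender, EventArgs e)
        {
            userControl2.NumberOfCells = (int)numericUpDown1.Value;
            userControl2.Invalidate(true); // invalidate the control and its children
            userControl2.Update();         // force the repaint now rather than later
        }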


  • Deleting a non-owned dynamic array through a pointer

    - by ayanzo
    Hello all, I'm relatively novice when it comes to C++, as I was weaned on Java for much of my undergraduate curriculum ('tis a shame). Memory management has been a hassle, but I've purchased a number of books on ANSI C and C++. I've poked around the related questions but couldn't find one that matched this particular criteria. Maybe it's so obvious nobody mentions it? This question has been bugging me, but I feel as if there's a conceptual point I'm not utilizing. Suppose:

        char original[56];
        original[0] = 'a';
        original[1] = 'b';
        original[2] = 'c';
        original[3] = 'd';
        original[4] = 'e';
        original[5] = '\0';
        char *shaved = shavecstr(original);
        delete[] shaved;

    where

        char* shavecstr(char* cstr)
        {
            size_t len = strlen(cstr);
            char* ncstr = new char[len];
            strcpy(ncstr, cstr);
            return ncstr;
        }

    The whole point is to have 'original' be a buffer that fills with characters and routinely has its copy shaved and used elsewhere. To prevent leaks, I want to free up the memory held by 'shaved' so it can be used again after it passes through some arguments. There is probably a good reason why this is restricted, but there should be some way to free the memory, as by this configuration there is no way to access the original owner (pointer) of the data.
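
    A side note on the general rule (not specific to this post): delete[] on 'shaved' is legal from any scope, as long as the pointer came from new[] - the caller simply owns the copy. The copy itself has an off-by-one, though: strcpy() writes len + 1 bytes, including the terminating '\0'. A sketch of the corrected helper:

        char* shavecstr(const char* cstr)
        {
            size_t len = strlen(cstr);
            char* ncstr = new char[len + 1]; // +1 for the '\0' that strcpy also copies
            strcpy(ncstr, cstr);
            return ncstr;                    // caller is responsible for delete[]
        }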


  • Which articles should I read before starting to make my custom drawn winforms app?

    - by Dmitriy Matveev
    Hello! I'm currently developing a windows forms application with a lot of user controls. Some of them are just custom drawn buttons or panels, and some of them are compositions of these buttons and panels inside FlowLayoutPanels and TableLayoutPanels. The window itself is also custom drawn. I don't have much experience in winforms development, but I've made a proper decomposition of the proposed design into user controls, and the implementation is already almost finished. I've solved many of the problems that arose during development with the help of google, msdn, SO and several dirty hacks (when nothing was helping), and I'm still experiencing some of them. There are a lot of gaps in my knowledge base, since I don't know the answers to questions like:

        - When should I use things like double buffering, suspended layout, suspended redraw?
        - What should I do with controls which shouldn't be visible at some moment?
        - What are the common performance pitfalls (I think I've fallen into several)?

    So I think there should be some great articles which can give enough knowledge to avoid the most common problems and improve the performance and maintainability of my application. Maybe some of you can recommend a few?
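
    On the double-buffering point specifically, the standard incantation for a custom drawn WinForms control is worth knowing (a sketch; ControlStyles is the documented API for this). Similarly, wrapping batches of child-control changes in SuspendLayout()/ResumeLayout() makes layout run once instead of once per change:

        public class CustomPanel : Panel
        {
            public CustomPanel()
            {
                // Paint everything in OnPaint, into an off-screen buffer, to avoid flicker.
                SetStyle(ControlStyles.UserPaint
                       | ControlStyles.AllPaintingInWmPaint
                       | ControlStyles.OptimizedDoubleBuffer, true);
            }
        }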


  • C++ boost.asio server and client connection understanding

    - by Edgar Buchvalov
    I started learning boost.asio and I have some problems with understanding TCP connections. There is an example from the official boost site:

        #include <ctime>
        #include <iostream>
        #include <string>
        #include <boost/asio.hpp>

        using boost::asio::ip::tcp;

        std::string make_daytime_string()
        {
            using namespace std; // For time_t, time and ctime;
            time_t now = time(0);
            return ctime(&now);
        }

        int main()
        {
            try
            {
                boost::asio::io_service io_service;
                tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 13));
                for (;;)
                {
                    tcp::socket socket(io_service);
                    acceptor.accept(socket);
                    std::string message = make_daytime_string();
                    boost::system::error_code ignored_error;
                    boost::asio::write(socket, boost::asio::buffer(message),
                                       boost::asio::transfer_all(), ignored_error);
                }
            }
            catch (std::exception& e)
            {
                std::cerr << e.what() << std::endl;
            }
            return 0;
        }

    My question is: why, if I want to connect to this server via a client, do I have to write the following?

        boost::asio::io_service io_service;
        tcp::resolver resolver(io_service);
        tcp::resolver::query query(host_ip, "daytime"); // why daytime?
        tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
        tcp::resolver::iterator end;

    Why "daytime"? What does it mean, and where is it initialized in the server? Or have I just missed something? The full client code is at www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/tutorial/tutdaytime1.html. Thanks for the explanation in advance.
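
    For context: "daytime" is not defined by the server at all. It is a well-known service name that the resolver looks up (on Unix via /etc/services) and maps to TCP port 13, the same port the acceptor above binds explicitly. Passing the port number as a string works just as well (a sketch):

        // Equivalent query: the "daytime" service is simply TCP port 13.
        tcp::resolver::query query(host_ip, "13");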


  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to execute a filter on a digital 8bppIndexed image, writing the pixel values to a byte[] buffer to be converted back (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct. Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should use lock and unlock, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e. It is quite clear that I am losing a pixel per row, but I cannot understand why. I have carefully controlled all the parameters: OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content, even writing known values as a way to test. The code is nearly identical to other code posted on StackOverflow and elsewhere. Could someone help me identify where the problem is? Thanks a lot.
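
    The one-pixel-per-row shear is the classic signature of ignoring stride: BitmapData rows are padded to a multiple of 4 bytes, so a 254-pixel-wide 8bpp image has a 256-byte stride, and a single flat Marshal.Copy drifts 2 bytes further off with every row. Copying row by row fixes it (a sketch under that assumption):

        for (int y = 0; y < OutputImageRows; y++)
        {
            IntPtr rowPtr = new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride);
            Marshal.Copy(byteBuffer, y * OutputImageCols, rowPtr, OutputImageCols);
        }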


  • Android: Streaming audio over TCP Sockets

    - by user299988
    Hi, for my app I need to record audio from the MIC on an Android phone and send it over TCP to another Android phone, where it needs to be played. I am using the AudioRecord and AudioTrack classes. This works great with a file - write audio to the file using DataOutputStream, and read from it using DataInputStream. However, if I obtain the same stream from a socket instead of a File and try writing to it, I get an exception. I am at a loss to understand what could possibly be going wrong. Any help would be greatly appreciated. EDIT: The problem is the same even if I try larger buffer sizes (65535 bytes, 160000 bytes). This is the code of the recorder:

        int bufferSize = AudioRecord.getMinBufferSize(11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                11025, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        byte[] tempBuffer = new byte[bufferSize];

        recordInstance.startRecording();
        while (/*isRecording*/) {
            bufferRead = recordInstance.read(tempBuffer, 0, bufferSize);
            dataOutputStreamInstance.write(tempBuffer);
        }

    The DataOutputStream above is obtained as:

        BufferedOutputStream buff = new BufferedOutputStream(out1); // out1 is the socket's outputStream
        DataOutputStream dataOutputStreamInstance = new DataOutputStream(buff);

    Could you please have a look and let me know what it is that I could be doing wrong here? Thanks,
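
    One concrete bug visible in the loop, independent of whatever exception the socket throws: read() may return fewer bytes than the buffer holds (or a negative error code), but write(tempBuffer) always sends the full buffer. A sketch that sends only what was captured:

        int bufferRead;
        while (isRecording) {
            bufferRead = recordInstance.read(tempBuffer, 0, bufferSize);
            if (bufferRead > 0) {
                // Send only the bytes actually read this round.
                dataOutputStreamInstance.write(tempBuffer, 0, bufferRead);
            }
        }
        dataOutputStreamInstance.flush();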


  • How do I read binary C++ protobuf data using Python protobuf?

    - by nbolton
    The Python version of Google protobuf gives us only:

        SerializeAsString()

    whereas the C++ version gives us both:

        SerializeToArray(...)
        SerializeAsString()

    We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string? Is this the correct way of doing it?

        binary = get_binary_data()
        binary_size = get_binary_size()
        string = None
        for i in range(len(binary_size)):
            string += i
        message = new MyMessage()
        message.ParseFromString(string)

    Update: Here's a new example, and a problem:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(data)

    When we get to the foo_bar.ParseFromString(data) line, I get this error:

        Exception Type: DecodeError
        Exception Value: Too many bytes when decoding varint.

    Update 2: It turns out that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding). This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate this, I have used this temporary code:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            string = ''
            for i in range(0, len(data)):
                byte = data[i]
                if byte != '\xcc': # yuck!
                    string += data[i]
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(string)

    There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
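
    For reference, the length-prefix scheme described at the end looks like this on the reading side (a sketch assuming the C++ writer prefixes each message with a 4-byte little-endian size; FooBar is the post's message type):

        import struct

        with open('foobars.bin', 'rb') as f:
            while True:
                header = f.read(4)
                if len(header) < 4:
                    break  # end of file
                (size,) = struct.unpack('<I', header)  # 4-byte little-endian length
                foo_bar = FooBar()
                foo_bar.ParseFromString(f.read(size))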


  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make changes to my .emacs file without having to reload Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x load-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have all the toggles re-evaluated without having to specify them, I would prefer that. In any case, I would also appreciate it if someone told me how to toggle the value of a variable, so that if I just changed one toggle I can do it with a command rather than restart Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question):

        (setq skeleton-pair t)
        (setq skeleton-pair-on-word t)
        (global-set-key (kbd "[") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "(") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "{") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "<") 'skeleton-pair-insert-maybe)

    Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented it all out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e by PP) works if I am using it to evaluate the toggle the first time, but not when I want to undo it. I would like something that would evaluate the commenting out, if such a thing exists... Thanks :)
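
    Re-evaluating the file cannot undo these settings, because commented-out lines evaluate to nothing at all: the old variable values and keybindings simply stay in place. Undoing them means evaluating the opposite statements (a sketch; self-insert-command is the default binding for printable keys):

        (setq skeleton-pair nil)
        (setq skeleton-pair-on-word nil)
        (global-set-key (kbd "[") 'self-insert-command)
        (global-set-key (kbd "(") 'self-insert-command)
        (global-set-key (kbd "{") 'self-insert-command)
        (global-set-key (kbd "<") 'self-insert-command)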


  • Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to

    - by Simon B.
    I have a folder with ~10 000 subfolders. Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot, or do I have to set up inotify for each folder? (i.e. I lose if I want to do this for 10k+ folders). I guess yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345

    My problem: I need to keep folders time-sorted with the most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are ok. Opening a file (typically MS Excel) and closing it again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later do I go and look for the newest file in its folder, since the file date of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also, in case several files from the same folder are used/created, it makes sense to buffer the timestamping of that folder's parents, to at least get a bunch of updates collapsed into one delayed update. I'm looking for a linux solution. I have some code that can be run on a windows server; most of the queuing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb

    Available APIs:

        - The relative of inotify on windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx
        - Samba? Patching samba source is a possibility, but perhaps there are already hooks available?
        - Other possibilities, like client side (various windows versions) spying on file activities in order to update folders recursively?
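
    As a baseline, the inotify-tools package automates the per-directory setup: inotifywait -r walks the tree and registers a watch on every subdirectory itself (still one kernel watch per folder, bounded by the fs.inotify.max_user_watches sysctl). A sketch that matches the wait-until-closed idea above:

        # report files only after they are closed for writing
        inotifywait -m -r -e close_write --format '%w%f' /SharedRoot |
        while read path; do
            echo "queue for re-stamping: $path"   # feed the delayed update queue here
        done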


  • How to get user input before saving a file in Sublime Text

    - by EddieJessup
    I'm making a plugin in Sublime Text that prompts the user for a password to encrypt a file before it's saved. There's a hook in the API that's executed before a save is executed, so my naïve implementation is:

        class TranscryptEventListener(sublime_plugin.EventListener):

            def on_pre_save(self, view):
                # If document is set to encode on save
                if view.settings().get('ON_SAVE'):
                    self.view = view
                    # Prompt user for password
                    message = "Create a Password:"
                    view.window().show_input_panel(message, "", self.on_done, None, None)

            def on_done(self, password):
                self.view.run_command("encode", {"password": password})

    The problem with this is that by the time the input panel appears for the user to enter their password, the document has already been saved (despite the trigger being 'on_pre_save'). Then, once the user hits enter, the document is encrypted fine, but the situation is that there's a saved plaintext file and a modified buffer filled with the encrypted text. So I need to make Sublime Text wait until the user has input the password before carrying out the save. Is there a way to do this? At the moment I'm just manually re-saving once the encryption has been done:

        def on_pre_save(self, view, encode=False):
            if view.settings().get('ON_SAVE') and not view.settings().get('ENCODED'):
                self.view = view
                message = "Create a Password:"
                view.window().show_input_panel(message, "", self.on_done, None, None)

        def on_done(self, password):
            self.view.run_command("encode", {"password": password})
            self.view.settings().set('ENCODED', True)
            self.view.run_command('save')
            self.view.settings().set('ENCODED', False)

    but this is messy, and if the user cancels the encryption then the plaintext file gets saved, which isn't ideal. Any thoughts?

    Edit: I think I could do it cleanly by overriding the default save command. I hoped to do this by using the on_text_command or on_window_command triggers, but it seems that the save command doesn't trigger either of these (maybe it's an application command? But there's no on_application_command). Is there just no way to override the save function?
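
    One route that sidesteps the hook entirely (an untested sketch, not an official override mechanism): bind the save keystroke to a custom command that prompts first and only invokes the built-in save afterwards, so nothing touches the disk until the password is in:

        import sublime_plugin

        class EncryptAndSaveCommand(sublime_plugin.TextCommand):
            def run(self, edit):
                if self.view.settings().get('ON_SAVE'):
                    self.view.window().show_input_panel(
                        "Create a Password:", "", self.on_done, None, None)
                else:
                    self.view.run_command('save')

            def on_done(self, password):
                self.view.run_command("encode", {"password": password})
                self.view.run_command('save')  # disk is only touched here

    paired with a key binding such as { "keys": ["ctrl+s"], "command": "encrypt_and_save" } in the user key bindings file.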


  • RSA_sign and RSACryptoProvider.VerifySignature

    - by Miky D
    I'm trying to get up to speed on how to get some code that uses OpenSSL for cryptography to play nice with another program that I'm writing in C#, using the Microsoft cryptography providers available in .NET. More to the point, I'm trying to have the C# program verify an RSA message signature generated by the OpenSSL code. The code that generates the signature looks something like this:

        // Code in C, using the OpenSSL RSA implementation
        char msgToSign[] = "Hello World";   // the message to be signed
        char signature[RSA_size(rsa)];      // buffer that will hold the signature
        int slen = 0;                       // will contain the signature size

        // rsa is an OpenSSL RSA context, loaded with the public/private key pair
        memset(signature, 0, sizeof(signature));
        RSA_sign(NID_sha1,
                 (unsigned char*)msgToSign,
                 strlen(msgToSign),
                 signature,
                 &slen,
                 rsa);

        // now signature contains the message signature
        // and can be verified using the RSA_verify counterpart

    I would like to verify the signature in C#. There, I would do the following:

        1. import the other side's public key into an RSACryptoServiceProvider object
        2. receive the message and its signature
        3. try to verify the signature

    I've got the first two parts working (I've verified that the public key is loading properly, because I managed to send an RSA encrypted text from the C# code to the OpenSSL code in C and successfully have it decrypted). To verify the signature in C# I've tried using the VerifySignature method of the RSACryptoServiceProvider, but that didn't work. Digging around the internet, I was only able to find some vague information pointing out that .NET uses a different method for generating the signature than OpenSSL does. So, does anybody know how to accomplish this?
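
    One detail worth checking: RSA_sign() expects its input to already be a message digest (for NID_sha1, a 20-byte SHA-1 hash), yet the snippet passes the raw string, so the resulting signature wraps "Hello World" where a digest should be, and no standard verifier will accept it. If the C side is changed to hash first (SHA1() followed by RSA_sign()), the .NET side can verify with VerifyData, which hashes internally - a sketch, assuming the public key is already imported into rsa:

        byte[] message = Encoding.ASCII.GetBytes("Hello World");
        bool ok = rsa.VerifyData(message, new SHA1CryptoServiceProvider(), signature);

        // Lower-level equivalent, given hash = SHA1(message):
        // bool ok2 = rsa.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), signature);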


  • Optimizing quality for available bandwidth in Flash/RTMFP

    - by Artem M.
    I'm developing a simple one-on-one P2P video chat using ActionScript, and I'd like to ensure the best video quality for the peers given their bandwidth. This means:

        1. Setting the best quality given the available bandwidth when the chat starts.
        2. Responding to network congestion during the chat by decreasing the quality.

    The task is similar to dynamic stream switching, but P2P has its specifics that make dynamic streaming approaches not work. For example, the maxBytesPerSecond metric monitored in dynamic stream switching is pretty useless in P2P, where the receiving NetStream's buffer size is set to 0 to minimize latency. So far, it looks like the most reliable QoS metric for P2P is SRTT. In my simulated tests on a local network, it shoots up to 500 ms and more when a bandwidth limit is introduced. However, it gives no hint as to how best to adjust the value for bandwidth in Camera.setQuality(0, bandwidth) to respond to the congestion. I've done lots of experiments, and I still don't see a clear and simple solution to the problem. I'm also wondering how this issue is addressed (if at all) in other RTMFP chat solutions.


  • FILE* issue in PPU-side code

    - by Cristina
    We are working on a homework assignment in CELL programming for college, and their feedback on our questions is kinda slow, so I thought I could get some faster answers here. I have PPU-side code which tries to open a file passed down through char* argv[]; however, this doesn't work: it cannot make the assignment of the pointer, I get NULL. Now my first idea was that the file isn't in the correct directory, and I copied it to every possible and logical place. My second idea is that maybe the PPU wants this pointer in its LS area, but I can't deduce if that's the bug or not. So... my question is: what am I doing wrong? I am working with the Fedora 7 Cell SDK, with Eclipse as an IDE. Maybe my argument setup is wrong, though it gets the name of the file correctly. Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            //Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file)
            {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            //.......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1,2,3,4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            //initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);
            faces = read_bin_data(argv[3]);
            //.......
        }

    Thanks for the help.
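
    Two small observations, neither Cell-specific (a sketch, not a diagnosis of the actual failure): fopen() allocates its own FILE, so the malloc just leaks; and perror() reports why fopen failed, which immediately distinguishes a wrong path from a permission problem. Note also that read_bin_data(argv[3]) requires at least three command-line arguments to be passed:

        images_t *read_bin_data(char *name)
        {
            FILE *file = fopen(name, "rb"); /* fopen allocates the FILE itself */
            if (!file)
            {
                perror(name); /* e.g. "faces.bin: No such file or directory" */
                return NULL;
            }
            /* ... */
        }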


  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OSX (POSIX) + FTDI chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baudrate (8N1 = 10 bits on the wire per 8 data bits, 19200 bps serial = 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights. This is the code that's restricting the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;

                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken-time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );

                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try to keep that below a fixed threshold. Any advice appreciated.
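
    Two hedged suggestions. First, incremental bookkeeping with floats accumulates rounding error; pacing against absolute totals since the stream started cannot drift (a sketch; now() is a hypothetical monotonic-seconds helper):

        // total_bytes_written and start_time are set once when the stream starts.
        double target  = (double(total_bytes_written) * 10.0) / double(baudrate);
        double elapsed = now() - start_time; // hypothetical monotonic clock helper
        if (target > elapsed)
            delayUs(int((target - elapsed) * 1e6));

    Second, on POSIX the driver can report how many bytes are still queued for transmission, which matches the keep-the-queue-below-a-threshold idea directly (TIOCOUTQ is available on Linux and the BSDs; worth verifying against the FTDI driver on OS X):

        #include <sys/ioctl.h>

        int queued = 0;
        ioctl(fd, TIOCOUTQ, &queued); // bytes waiting in the driver's output buffer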


  • get_user() running in kernel mode returns an error

    - by Fangkai Yang
    Hi all, I have a problem with the get_user() macro. What I did is as follows: I run the following program:

        int main()
        {
            int a = 20;
            printf("address of a: %p", &a);
            sleep(200);
            return 0;
        }

    When the program runs, it outputs the address of a, say 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address (at the time when I did this, I also made sure the process didn't terminate, because I put it to sleep for 200 seconds...). The address is first sent as a string, and I cast it into a pointer type:

        int * ptr = (int*)simple_strtol(buffer, NULL, 16);
        printk("address: %p", ptr); // I use this line to make sure the cast is correct.
                                    // When running, it outputs bff91914, as expected.
        int val = 0;
        int res;
        res = get_user(val, (int*) ptr);

    However, res is always non-zero, meaning that get_user returns an error. I am wondering what the problem is.... Thank you!! -- Fangkai
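
    A hedged explanation of the symptom: get_user() copies from the address space of the current task, and a user-space address like 0xbff91914 only has meaning inside the process that printed it. The call can therefore only succeed if the module code runs in that process's context (for instance, inside the handler for a write() that the process itself makes to a /proc or device file); from a kernel thread or another process's syscall it fails with -EFAULT. A sketch:

        int val = 0;
        /* Only valid if 'current' is the process that owns the address. */
        if (get_user(val, (int __user *)ptr))
            printk(KERN_INFO "get_user failed: %p not mapped in current task\n", ptr);
        else
            printk(KERN_INFO "value at %p = %d\n", ptr, val);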

