Search Results

Search found 59543 results on 2382 pages for 'solution files'.


  • Sending files through a webservice

    - by Jay
    Hi, I have to send some files through a web service in C#. The files to be sent can come from different locations, e.g. one folder holding 4 files and another folder holding 5 files. Assuming I have a mechanism to select which files to send, what would be the best way to send them? Should I send them one by one and let the client figure out how to put them together, or zip all the files into a single archive and send that zip file to the client? If there is any other way to implement this, I would be more than happy to look into that approach too. Thanks
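
    A minimal sketch of the zip approach (illustrative only: the FilePacker/PackFiles names are my own, and it assumes .NET 4.5+ with a reference to System.IO.Compression.FileSystem). The service zips the selected files into one archive and returns a single byte array, so the client only has to unzip it:

        // Sketch: pack an arbitrary selection of files into one zip payload.
        using System.Collections.Generic;
        using System.IO;
        using System.IO.Compression;

        public static class FilePacker
        {
            // The web service method can return this byte[] directly; the
            // client unzips it to recover the individual files.
            public static byte[] PackFiles(IEnumerable<string> paths)
            {
                using (var buffer = new MemoryStream())
                {
                    using (var zip = new ZipArchive(buffer, ZipArchiveMode.Create, leaveOpen: true))
                    {
                        foreach (var path in paths)
                        {
                            // Keep only the file name so files picked from
                            // different folders sit side by side in the archive.
                            zip.CreateEntryFromFile(path, Path.GetFileName(path));
                        }
                    }
                    return buffer.ToArray();
                }
            }
        }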

    Read the article

  • Configure nHibernate for multiple-project solution

    - by NoOne
    Hello, I'm doing a project in C# WinForms. The solution is composed of:

    - Client project: Windows Forms where the user will do CRUD operations on the models;
    - Server project;
    - Common project: holds the models (in my diagram there is only one model, Item);
    - ListSingleton: a remote object that performs the operations on the models.

    I already have all the communication working, but now I need to work on persisting the data in a MySQL database. I was trying to use NHibernate, but I'm having some trouble. My main problem is how to organize my NHibernate configuration. In which project do I keep the mappings? The Common project? In which project do I keep the NHibernate configuration file (App.config)? The ListSingleton project? And in which project do I run code like this:

        Configuration cfg = new Configuration();
        cfg.AddXmlFile("Item.hbm.xml");
        ISessionFactory factory = cfg.BuildSessionFactory();
        ISession session = factory.OpenSession();
        ITransaction transaction = session.BeginTransaction();

        Item newItem = new Item("BLAA");
        // Tell NHibernate that this object should be saved
        session.Save(newItem);

        // Commit all of the changes to the DB and close the ISession
        transaction.Commit();
        session.Close();

    The ListSingleton project? Although I added a reference to the Common project in ListSingleton, I keep getting an error on the AddXmlFile line... My mapping is correct, because I tried it in a one-project solution and it worked :X
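
    One way around the AddXmlFile error (a sketch, assuming the .hbm.xml files are marked as Embedded Resources in the Common project that holds the models): load the mappings from the assembly rather than from a path relative to the working directory, so the code behaves the same no matter which project executes it:

        // Sketch: resolve mappings from the assembly containing the model.
        using NHibernate;
        using NHibernate.Cfg;

        Configuration cfg = new Configuration();
        cfg.Configure();                         // reads NHibernate settings from App.config
        cfg.AddAssembly(typeof(Item).Assembly);  // picks up Item.hbm.xml embedded in Common

        ISessionFactory factory = cfg.BuildSessionFactory();

    With that layout, the mappings live next to the models in Common, while the NHibernate configuration section stays in the App.config of whichever host process builds the session factory (ListSingleton here).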

    Read the article

  • Is it possible to store only a checksum of a large file in git?

    - by Andrew Grimm
    I'm a bioinformatician currently extracting normal-sized sequences from genomic files. Some genomic files are large enough that I don't want to put them into the main git repository, whereas I'm putting the extracted sequences into git. Is it possible to tell git "Here's a large file - don't store the whole file, just take its checksum, and let me know if that file is missing or modified." If that's not possible, I guess I'll have to either git-ignore the large files, or, as suggested in this question, store them in a submodule.

    Read the article

  • online backup solution with api for desktop

    - by user161179
    I made a small backup application that simply creates an archive out of specified files and folders. Now I need an online service to store that archive. Which service can I use that can be integrated into my app? Options I considered:

    - Dropbox is ideal, but they have all but abandoned the desktop.
    - SkyDrive has no API.
    - I couldn't find any free, reliable backup service that uses FTP.

    Anything else? It should provide 1-2 GB of free space and be reasonably reliable. Thanks. My app is in C#, but it can be ported to any other language as well.
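
    Whatever host ends up being chosen, a plain FTP upload needs nothing beyond the BCL. A minimal sketch (the URI and credentials are placeholders, not a provider recommendation):

        using System.IO;
        using System.Net;

        public static class BackupUploader
        {
            public static void Upload(string archivePath)
            {
                var request = (FtpWebRequest)WebRequest.Create(
                    "ftp://backup.example.com/" + Path.GetFileName(archivePath));
                request.Method = WebRequestMethods.Ftp.UploadFile;
                request.Credentials = new NetworkCredential("user", "password");

                using (Stream ftpStream = request.GetRequestStream())
                using (FileStream file = File.OpenRead(archivePath))
                {
                    file.CopyTo(ftpStream);  // .NET 4+; on older versions loop with a buffer
                }
                request.GetResponse().Close();
            }
        }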

    Read the article

  • Is SharePoint a good solution for me?

    - by Pam Bullock
    My company has many branches that use the same software suite we've written for them. We're looking at SharePoint as a way to open a dialog with them about the software: reviews, change requests (not official ones, just for us to get an idea and for them to discuss amongst themselves what would be helpful). We would also like to use the document repository feature and possibly the blog. SharePoint is already available to us, which is why we're looking into it. I've done a lot of research and watched a lot of starter tutorials, and it seems to have what we're looking for. For those of you who know it well: Do you think it would be a good solution for us? Do you think it would be overkill? If so, do you have an alternative suggestion? Are there other aspects of SharePoint that I haven't discovered yet that would be helpful for what we're doing? I will continue to research online, but it's always great to hear the opinion of someone experienced with the product. Thanks so much! Pam

    Read the article

  • Elegant solution to retrieve custom date and time?

    - by kefs
    I am currently using a date and time picker to retrieve a user-submitted date and time, and then I set a control's text to the selected values. I am using the following code:

        new DatePickerDialog(newlog3.this, d, calDT.get(Calendar.YEAR),
                calDT.get(Calendar.MONTH), calDT.get(Calendar.DAY_OF_MONTH)).show();
        new TimePickerDialog(newlog3.this, t, calDT.get(Calendar.HOUR_OF_DAY),
                calDT.get(Calendar.MINUTE), true).show();
        optCustom.setText(fmtDT.format(calDT.getTime()));

    Now, while the above block does bring up the date and time widgets and sets the text, it executes in full before the user can select the date: it brings up the date box first, then the time box over it, and then updates the text, all without any user interaction. I would like the date widget to wait to launch the time selector until the date selection is done, and I would like the setText to run only after the time widget is done. How is this possible? Or is there a more elegant solution that is escaping me?

    Edit: This is the code for the DatePickerDialog/TimePickerDialog listeners, which is located within the class:

        DatePickerDialog.OnDateSetListener d = new DatePickerDialog.OnDateSetListener() {
            public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) {
                calDT.set(Calendar.YEAR, year);
                calDT.set(Calendar.MONTH, monthOfYear);
                calDT.set(Calendar.DAY_OF_MONTH, dayOfMonth);
                //updateLabel();
            }
        };

        TimePickerDialog.OnTimeSetListener t = new TimePickerDialog.OnTimeSetListener() {
            public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
                calDT.set(Calendar.HOUR_OF_DAY, hourOfDay);
                calDT.set(Calendar.MINUTE, minute);
                //updateLabel();
            }
        };

    Thanks in advance

    Read the article

  • How to loop through all illustrator files in a folder (CS6)

    - by Julian
    I have written some JavaScript to save .ai files to two separate locations with different resolutions, one of them cropped to a reduced-size artboard (courtesy of John Otterud / Articmill for the main part). There are other variables in the script that I am not using at present, but I want to leave the functionality there for a later date (additional layers to export, other resolutions, etc.). I can't get it to loop through all files in a folder. I cannot find the script that works, or where to insert it at the right place. I can get as far as selecting the folder and, I suppose, creating an array, but after that what next? This is the create-array part of the script:

        // JavaScript Document
        // Set up variables
        var destDoc, sourceDoc, sourceFolder, newLayer;

        // Select the source folder.
        sourceFolder = Folder.selectDialog('Select the folder with Illustrator files that you want to merge into one', '~');
        destDoc = app.documents.add();

        // If a valid folder is selected
        if (sourceFolder != null) {
            files = new Array();
            // Get all files matching the pattern
            files = sourceFolder.getFiles();

    I have inserted this at the beginning of the main script (probably where I am going wrong, because I can select the folder but then nothing more):

        #target illustrator
        var docRef = app.activeDocument;
        with (docRef) {
            if (layers[i].name = 'HEADER') {
                layers[i].name = '#' + activeDocument.name;
                save()
            }
        }

        // *** Export Layers as PNG files (in multiple resolutions) ***
        var subFolderName = "For_PLMA";
        var subFolderTwoName = "For_VLP";
        var saveInMultipleResolutions = true;
        // ...
        // Note: only use one character!
        var exportLayersStartingWith = "%";
        var exportLayersWithArtboardClippingStartingWith = "#";
        // ...
        var normalResolutionFileAppend = "_VLP";
        var highResolutionFileAppend = "_PLMA";
        // ...
        var normalResolutionScale = 100;
        var highResolutionScale = 200;
        var veryhighResolutionScale = 300;

        // *** Start of script ***
        var doc = app.activeDocument;

        // Make sure we have saved the document
        if (doc.path != "") {

    Then the rest of the export script runs on from there.

    Read the article

  • Data Integration Solution?

    - by Shlomo
    At my company we have a number of data feeds and processing jobs that run on any given day. The number of feeds and processing steps is starting to outgrow our ability to manage it ad hoc, as it is managed currently. Is there a good solution that helps with logging and managing/scheduling dependencies? For example:

    - A: When file x is FTP-dropped into directory D1, kick off processing step B
    - B: Load the flat file into DB1
    - C: When file y is FTP-dropped into directory D2, kick off processing step D
    - D: Load the flat file into DB11
    - E: When B and D are done, churn through the data and load the new data into DB111
    - F: When step E is done, launch application process P
    - G: etc.

    I want those steps to run at the appropriate times. Moreover, if B fails there's no reason to run steps E and F, but I could still run C and D; and when I re-run B successfully, it should trigger just E and F to re-run, not C and D. We're a .NET/C#/SQL Server shop, and I'm already familiar with SSIS. Is that really the best there is? It manages steps well, but not external dependencies or logging. Open source (.NET) is preferred, but not required.
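
    The file-drop triggers themselves (steps A and C) are cheap to build on FileSystemWatcher; the hard part is the dependency tracking and logging layered on top. A sketch of one trigger (the path, filter, and LoadFlatFile helper are hypothetical):

        using System;
        using System.IO;

        class FeedWatcher
        {
            static void Main()
            {
                // Step A: when file x lands in D1, kick off step B.
                var watcher = new FileSystemWatcher(@"C:\feeds\D1", "x.*");
                watcher.Created += (sender, e) => LoadFlatFile(e.FullPath);
                watcher.EnableRaisingEvents = true;
                Console.ReadLine();  // keep the process alive
            }

            // Step B (hypothetical helper): load the flat file into DB1 and
            // record completion so step E can test "B and D are done".
            static void LoadFlatFile(string path)
            {
                Console.WriteLine("Loading {0} into DB1...", path);
            }
        }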

    Read the article

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi all, sorry if the question is very naive. I have to check the following condition in my code: 0 < x < y, i.e. code similar to

        if (x > 0 && x < y)

    The basic problem at the system level is that, currently, my existing code is hit many times for every call (telecom-domain terminology), so performance is very, very critical. Now I need to add a boundary check at many locations, with a different boundary comparison at each location. At the level of normal coding, the above comparison looks trivial, but added on top of my statistics module (which is dipped into many times), performance will go down. So I would like to know the best possible way to handle this scenario (an optimal technique for limit checking). For example, does a bit-level comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span? Other info:

    - x is an unsigned integer (which must be checked to be greater than 0 and less than y).
    - y is an unsigned integer; it is non-const and varies for every comparison.
    - Time is the constraint here, not space.
    - Language: C++.

    Now, if I later need to change the type of y to float/double, would there be another way to optimize the check (i.e. would the suggested optimal technique for integers become non-optimal when y is changed to float/double)? Thanks in advance for any input. PS: The OSes used are SUSE 10 64-bit x86_64, AIX 5.3 64-bit, and HP-UX 11.1A 64-bit.

    Read the article

  • reading files provided via $_GET

    - by Max
    I have a PHP script that takes a relative pathname via $_GET, reads that file, and creates a thumbnail of it. I don't want the user to be able to read just any file from the server. Only files from a certain directory should be allowed; otherwise the script should exit(). Here is my folder structure:

        files/     <-- all files from this folder are public
        my_stuff/  <-- this is the folder of my script that reads the files

    My script is accessed via mydomain.com/my_stuff/script.php?pathname=files/some.jpg. What should not be allowed, e.g.: mydomain.com/my_stuff/script.php?pathname=files/../db_login.php. So, here is the relevant part of the script in the my_stuff folder:

        ...
        $pathname = $_GET['pathname'];
        $pathname = realpath('../' . $_GET['pathname']);
        if (strpos($pathname, '/files/') === false)
            exit('Error');
        ...

    I am not really sure about that approach; it doesn't seem too safe to me. Anyone with a better idea?

    Read the article

  • Sending Big Files with WCF

    - by Sean Feldman
    I had to look into a project that submits large files to a WCF service. The implementation is based on data chunking. This is a good approach when your client and server are not both based on WCF but on different technologies. The problem with something like this is that chunking (whether you wish it or not) complicates the overall solution. The alternative would be streaming. In a WCF-to-WCF scenario, this is a piece of cake. When the client is Java, it becomes a bit more challenging (has anyone implemented a Java client streaming data to a WCF service?). What I really liked about the .NET implementation with WCF is that sending header info along with the stream was dead simple, and from the developer's point of view it looked like it was all just part of the DTO passed into the service:

        [ServiceContract]
        public interface IFileUpload
        {
            [OperationContract]
            void UploadFile(SendFileMessage message);
        }

    Where SendFileMessage is:

        [MessageContract]
        public class SendFileMessage
        {
            [MessageBodyMember(Order = 1)]
            public Stream FileData;

            [MessageHeader(MustUnderstand = true)]
            public FileTransferInfo FileTransferInfo;
        }
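
    One detail worth adding (my note, not part of the original post): streaming only happens if the binding opts in, otherwise WCF buffers the whole message. A sketch of the binding in code, which can equally be expressed in app.config:

        using System.ServiceModel;

        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            // The default cap is 64 KB; raise it for large uploads.
            MaxReceivedMessageSize = long.MaxValue
        };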

    Read the article

  • Be able to create, edit and move files to and from /var/www

    - by Ryan Murphy
    I have installed LAMP on Ubuntu 12.04. I try to access /var/www, but it doesn't allow me to do anything, saying I don't have permissions. I have tried:

    1. gksudo nautilus, which works but is a very inconvenient way of doing things.
    2. The following commands:

        sudo adduser ryan www-data
        sudo chown -R www-data:www-data /var/www
        sudo chmod g+rw /var/www

    The above didn't work. I have googled and searched this site for a solution, but none of the existing suggestions have worked.

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html, for example). So I created a Webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this makes newly created files 774). The shared folder is owned by the group Webdev with chmod g+s, so that new files/folders also belong to the group Webdev.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder through the IDE the file has permissions of 644 on the server (not group-writable). However, when I make a new file by connecting with Cyberduck (an SFTP client), the file permissions are 664, as they should be. So I don't understand why Coda is any different. After some trial and error, I believe that Coda first creates the file on the local disk and then uploads that file to the server. On a Mac, a newly created file is 644 by default, and when the client uploads a file that is already 644, it stays 644 on the server side (the umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder, but a file uploaded from my Mac via SCP doesn't get the default ACL permissions.

    In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students is modifying a file created by someone else and tries to upload or save it, Coda also tries to do a chmod, but fails because that user isn't the owner of the file. My current solution is using bindfs: I mount the shared web folder, and bindfs sets the permissions and group ownership of newly created files. However, bindfs seems to be a bit slow, and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, newly created files would arrive on the server the same way (644), since that is the Mac default.

    Other options:

    1. I teach the students to use ssh/chmod with their IDE to change their own file permissions when uploading.
    2. I set the default umask to 0002 on all the students' Macs, which would upload files with the right permissions.
    3. I write a cron script to fix the file permissions every 5 to 15 minutes (I think this is the worst option if students are working together at the same time).

    Is there any way to make all files uploaded via SCP get default file permissions of 664, even when the uploaded file has lower permissions? (After hours of searching, I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites? Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command and also: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp

    Read the article

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a Mac mini (running Mac OS X 10.4) from my Linux machine using VNC, and there's an issue that is driving me crazy... My Linux machine has 4 GB of RAM; I run a lot of different apps on it and have no issues at all. Everything is snappy, and I don't hear the hard disk swapping/reading/writing too often. With VNC, though, the hard disk swaps like mad when I'm moving things around on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temporary VNC files into that ramdisk, but the problem is I can't find any temp files. I've attempted this:

        #!/bin/bash
        while [ true ]
        do
            lsof | grep vnc
        done

    and eyeball-parsed the output to try to find some temp file: no luck. The VNC version I'm using is this one:

        $ vncviewer -version
        VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16
        Copyright (C) 2002-2005 RealVNC Ltd.

    No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of RAM), so there's really no reason to swap like crazy. This is driving me mad. Any help on how I could solve this problem is most welcome, because this is literally driving me nuts.

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size < 5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault, I have decided on a directory structure like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf. I've created it, and from a theoretical point of view it seems fine to me (though I have no experience with this), but I want to find out what will happen when the directories in there fill up with files. A question about creating this structure: is it better to create it all in one go (which takes approx. 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could test as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:

    - How much time does it take for an image to be saved when there is little or no space used?
    - How does this change as the space starts to be used up?
    - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
    - Does launching the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is this the only thing I have to do to get a clean start if I want to start my tests over again?

    Do you have any suggestions or corrections?
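
    A sketch of the measuring script described above (written in C# here purely for illustration; any language works, and the payload size and counts are made up). It writes dummy images into the three-level tree and logs the per-file latency, so the trend shows up as the directories fill:

        using System;
        using System.Diagnostics;
        using System.IO;

        class WriteBenchmark
        {
            static void Main()
            {
                var payload = new byte[4 * 1024 * 1024];  // ~4 MB dummy image
                var sw = new Stopwatch();
                for (int i = 1; i <= 100000; i++)
                {
                    // 000/000/000000001.jpg layout from the question.
                    string dir = Path.Combine(
                        (i / 1000000).ToString("000"),
                        (i / 1000 % 1000).ToString("000"));
                    Directory.CreateDirectory(dir);

                    sw.Restart();
                    File.WriteAllBytes(
                        Path.Combine(dir, i.ToString("000000000") + ".jpg"), payload);
                    sw.Stop();
                    Console.WriteLine("{0}\t{1:F1} ms", i, sw.Elapsed.TotalMilliseconds);
                }
            }
        }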

    Read the article

  • Inkscape won't open .cdr files

    - by Bora George
    Hello, I'm running Kubuntu 12.04, and I've installed Inkscape because I need to edit a .cdr file. Inkscape works normally, but the sole task I need it for, editing said .cdr file (which I understand it should be capable of by default), fails. Whenever I try to open the file I get the following error: "UniConvertor failed". The error itself is longer, but what it seems to be saying is that it can't find that application, and I myself haven't found it in the package manager. If someone has experienced something similar, or has a simple solution for editing .cdr files (I'm no artist, I just need to change some names), I would be very grateful. The full error.

    Read the article

  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog.

    syslog:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron {
        #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron {
            daily
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    syslog-ng:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern {
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    and maillog:

        /var/log/maillog {
            daily
            compress
            # rotate 365
            rotate 14
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    I am new to logrotate, so maybe I am missing something obvious. What could the issue be? The setup was already in place when I started managing the server, so I also don't know why there are three mentions of maillog in logrotate.

    Read the article

  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where large-ish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, so the source database files and the destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following, and do /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds an extension like YYYYMMDD instead of simply adding a number
            dateext
            # if a log file is missing, go on to the next one without issuing an error msg
            missingok
            # save logfiles for the last 49 days
            rotate 49
            # old versions of log files are compressed with gzip
            compress
            # postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after the logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I run the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.

    Read the article

  • Copying first 1000 PDF files having single, double quotes in their name to another folder

    - by racer_ace
    I have a folder with PDFs in it, and I need to process 1000 at a time. So I need to move them into another folder, process them, and delete them. For this I tried using

        $ find . -maxdepth 1 -type f | head -1000 | xargs cp -t $destdir

    It gives errors on single and double quotes in filenames. There are thousands of files, and I have no idea how many of them have these quotes in them. Can anyone help me find a solution? I also tried the -0 option; it did not work.

    Read the article

  • Search for odt files without indexing

    - by josinalvo
    I am looking for:

    1. a way to search inside .odt files (i.e. search the contents, not the name),
    2. that does not require any kind of indexing,
    3. that is graphical and very user-friendly (for a relatively old person who does not like computers much).

    I know that it is possible to have 1) and 2):

        for x in `find . -iname '*odt'`; do odt2txt $x | grep Query; done

    works well enough, and it's pretty fast. But I wonder if there is already a good solution that does this with a GUI (or can be adapted to do this easily).

    Read the article

  • sharepoint administrative server not started causing sharepoint deployment instability

    - by Nathan
    When I deploy anything to a variety of SharePoint servers, I get problems with some packages failing; from what I can see, the only error given is the one below:

        The timer job for this operation has been created, but it will fail because the administrative service for this server is not enabled. If the timer job is scheduled to run at a later time, you can run the jobs all at once using stsadm.exe -o execadmsvcjobs. To avoid this problem in the future, enable the Windows SharePoint Services administrative service, or run your operation through the STSADM.exe command line utility.

    Is this a known problem? I googled for the text but found only one other person with the problem, who simply worked around it. I'm trying to batch-deploy solutions programmatically, and these errors render the whole approach worthless if you have to go back and redo bits by hand afterwards. Is batch deployment simply not possible? Thanks!

    Read the article

  • Implementing emailing (bulk & event based) features for my website.

    - by Kabeer
    Hello. For my upcoming social networking website, I am looking for suggestions on the best way to implement emailing. Here are some of my requirements and constraints.

    Requirements:
    - Should be able to send emails based on events (new registrations, password changes, etc.), promotions (advertisements based on user consent), bulk mail (newsletters), reminders (profile updates), etc. I hope I got the point across.
    - Should be able to process faults (incorrect email address, mailbox full, etc.).
    - User-initiated invites (inviting friends to connect).

    Constraints:
    - As of now I am looking at GoDaddy for hosting; subsequently I shall move, maybe to the Amazon cloud. GoDaddy seems to be excruciatingly conservative (not always bad) when it comes to the ability to send email.
    - My tests on GoDaddy so far have been discouraging. There is a limit to the number of emails I can send, and sometimes if an email carries special characters it throws strange exceptions, such as claiming there was a virus-infected attachment (even though I hadn't attached a thing). The replies from GoDaddy support have been equally funny.

    My intent is not to portray GoDaddy as wrong, but I am looking for a workaround that frees me from the said constraints. I am looking for a mechanism or service that is either free or very cost-effective. I wonder how other sites address this. Mine is a .NET / Windows based application.
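
    Whatever relay finally gets used, the event-based sends are straightforward with System.Net.Mail. A minimal sketch (host, port, addresses, and credentials are placeholders for whatever the host permits):

        using System.Net;
        using System.Net.Mail;

        var client = new SmtpClient("relay.example.com", 587)
        {
            Credentials = new NetworkCredential("user", "password"),
            EnableSsl = true
        };
        using (var msg = new MailMessage("noreply@example.com", "member@example.com"))
        {
            msg.Subject = "Welcome aboard";
            msg.Body = "Thanks for registering.";
            client.Send(msg);  // faults surface as SmtpException; catch and log per recipient
        }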

    Read the article

  • jqGrid multi-checkbox custom edittype solution

    - by gsiler
    For those of you trying to understand jqGrid custom edit types... I created a multi-checkbox form element and thought I'd share. This was built using version 3.6.4. If anyone has a more efficient solution, please pass it on. Within the colModel, the appropriate edit fields look like this:

        edittype: 'custom',
        editoptions: {
            custom_element: MultiCheckElem,
            custom_value: MultiCheckVal,
            list: 'Check1,Check2,Check3,Check4'
        }

    Here are the JavaScript functions (BTW, with some modifications it also works when the list of checkboxes is in a DIV block):

        //------------------------------------------------------------
        // Description:
        //   MultiCheckElem is the "custom_element" function that builds the custom
        //   multiple check box input element. From what I have gathered, jqGrid calls
        //   this the first time the form is launched. After that, only the
        //   "custom_value" function is called.
        //
        //   The full list of checkboxes is in the jqGrid "editoptions" section "list"
        //   tag (in the options parameter).
        //------------------------------------------------------------
        function MultiCheckElem(value, options) {
            //----
            // for each checkbox in the list
            //     build the input element
            //     set the initial "checked" status
            // endfor
            //----
            var ctl = '';
            var ckboxAry = options.list.split(',');
            for (var i in ckboxAry) {
                var item = ckboxAry[i];
                ctl += '<input type="checkbox" ';
                if (value.indexOf(item + '|') != -1)
                    ctl += 'checked="checked" ';
                ctl += 'value="' + item + '"> ' + item + '</input><br />&nbsp;';
            }
            ctl = ctl.replace(/<br />&nbsp;$/, '');
            return ctl;
        }

        //------------------------------------------------------------
        // Description:
        //   MultiCheckVal is the "custom_value" function for the custom multiple check
        //   box input element. It appears that jqGrid invokes this function the first
        //   time the form is submitted and, the rest of the time, when the form is
        //   launched (action = set) and when it is submitted (action = 'get').
        //------------------------------------------------------------
        function MultiCheckVal(elem, action, val) {
            var items = '';
            if (action == 'get') { // the form has been submitted
                //----
                // for each input element
                //     if it's checked, add it to the list of items
                // endfor
                //----
                for (var i in elem) {
                    if (elem[i].tagName == 'INPUT' && elem[i].checked)
                        items += elem[i].value + ',';
                }
                // items contains a comma delimited list that is returned as the result of the element
                items = items.replace(/,$/, '');
            }
            else { // the form is launched
                //----
                // for each input element
                //     based on the input value, set the checked status
                // endfor
                //----
                for (var i in elem) {
                    if (elem[i].tagName == 'INPUT') {
                        if (val.indexOf(elem[i].value + '|') == -1)
                            elem[i].checked = false;
                        else
                            elem[i].checked = true;
                    }
                }
            }
            return items;
        }

    Read the article

  • Marshal.PtrToStructure (and back again) and generic solution for endianness swapping

    - by cgyDeveloper
    I have a system where a remote agent sends serialized structures (from an embedded C system) for me to read and store via IP/UDP. In some cases I need to send back the same structure types. I thought I had a nice setup using Marshal.PtrToStructure (receive) and Marshal.StructureToPtr (send). However, a small gotcha is that the network big-endian integers need to be converted to my x86 little-endian format to be used locally. When I'm sending them off again, big-endian is the way to go. Here are the functions in question:

        private static T BytesToStruct<T>(ref byte[] rawData) where T : struct
        {
            T result = default(T);
            GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
            try
            {
                IntPtr rawDataPtr = handle.AddrOfPinnedObject();
                result = (T)Marshal.PtrToStructure(rawDataPtr, typeof(T));
            }
            finally
            {
                handle.Free();
            }
            return result;
        }

        private static byte[] StructToBytes<T>(T data) where T : struct
        {
            byte[] rawData = new byte[Marshal.SizeOf(data)];
            GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
            try
            {
                IntPtr rawDataPtr = handle.AddrOfPinnedObject();
                Marshal.StructureToPtr(data, rawDataPtr, false);
            }
            finally
            {
                handle.Free();
            }
            return rawData;
        }

    An example structure might be used like this:

        byte[] data = this.sock.Receive(ref this.ipep);
        Request request = BytesToStruct<Request>(ref data);

    Where the structure in question looks like:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)]
        private struct Request
        {
            public byte type;
            public short sequence;

            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5)]
            public byte[] address;
        }

    What (generic) way can I swap the endianness when marshalling the structures? My need is such that the locally stored 'public short sequence' in this example will be little-endian for display to the user. I don't want to have to swap the endianness in a structure-specific way. My first thought was to use reflection, but I'm not very familiar with that feature. I also hoped that there would be a better solution out there that somebody could point me towards. Thanks in advance :)
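
    The reflection idea mentioned in the question does work generically; here is a minimal sketch (the EndianHelper name is mine). IPAddress.NetworkToHostOrder swaps bytes on little-endian hosts and is a no-op on big-endian ones, which is the semantics wanted in both directions:

        using System.Net;
        using System.Reflection;

        public static class EndianHelper
        {
            // Flips the byte order of every multi-byte integer field of a
            // struct; call after BytesToStruct and before StructToBytes.
            public static T SwapEndianness<T>(T value) where T : struct
            {
                object boxed = value;  // box once so SetValue mutates a single copy
                foreach (FieldInfo f in typeof(T).GetFields(
                    BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance))
                {
                    if (f.FieldType == typeof(short))
                        f.SetValue(boxed, IPAddress.NetworkToHostOrder((short)f.GetValue(boxed)));
                    else if (f.FieldType == typeof(int))
                        f.SetValue(boxed, IPAddress.NetworkToHostOrder((int)f.GetValue(boxed)));
                    else if (f.FieldType == typeof(long))
                        f.SetValue(boxed, IPAddress.NetworkToHostOrder((long)f.GetValue(boxed)));
                    // byte and byte[] fields (type, address here) need no swapping
                }
                return (T)boxed;
            }
        }

    Usage would be Request request = EndianHelper.SwapEndianness(BytesToStruct<Request>(ref data)); if y later includes float/double fields, the same loop can reverse their bytes with BitConverter instead.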

    Read the article
