Search Results

Search found 11358 results on 455 pages for 'utf 16'.

  • Terminal, vim and ssh color problem

    - by xaph
    I'm using vim as my editor, and I'm having problems with vim's colors. The same vim colorscheme gives different output in the terminal, in an ssh session, and in screen. I've learnt that they support different numbers of colors (16, 88 or 256). I don't care whether I use 16 or 256 colors, and the particular terminal emulator is not very important to me either. My questions: 1. How do I use the same vim colorscheme and get the same output everywhere? 2. I want to write one color definition and use it in every terminal (maybe with an Xdefaults file). Is that possible?
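
    A common starting point (a sketch, not a guaranteed fix, since terminal type names vary by system) is to make every session advertise the same 256-color terminal type and tell vim to use it:

        # in the shell rc on each machine you ssh into
        export TERM=xterm-256color

        # in ~/.screenrc, so screen keeps 256-color support
        term screen-256color

        # in ~/.vimrc
        set t_Co=256

    For question 2, ~/.Xresources entries such as XTerm*color0 through XTerm*color15 can pin down the 16 base colors for xterm-compatible emulators, but each emulator family reads its own resource names.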

  • How can I change the amount and size of Linux ramdisks (/dev/ram0 - /dev/ram15)?

    - by Kevin S.
    Using Linux, when I boot I automatically get 16 ramdisks of 16 MB each; however, I would like to create one really large ramdisk to test some software. I found that I can adjust the size of the ramdisks already on the system with the kernel boot parameter ramdisk_size; however, this makes all 16 ramdisks (/dev/ram0 - /dev/ram15) the specified size, so if I wanted a 1 GB ramdisk I would need 16 GB of memory. Basically, I want to create one 10 GB ramdisk, which would be /dev/ram0. How would I go about doing that? I assume there is a kernel boot parameter, but I just haven't found it.
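
    If the ramdisk driver is built as a module (brd on current kernels), its module parameters allow exactly this. A sketch, assuming your kernel ships brd as a module:

        # one ramdisk device of 10 GB (rd_size is in KiB)
        modprobe brd rd_nr=1 rd_size=10485760

    When the driver is built in, the same parameters can go on the kernel command line as brd.rd_nr=1 brd.rd_size=10485760. And if a block device is not strictly required, a tmpfs mount sidesteps the question entirely:

        mount -t tmpfs -o size=10g tmpfs /mnt/ramdisk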

  • USB hard drive not recognized

    - by user318772
    Until recently I was using this portable USB hard drive with my Win 7 laptop and my Ubuntu laptop. Suddenly neither laptop recognizes it. This is the message I get from lsusb:

        Bus 001 Device 004: ID 1058:1010 Western Digital Technologies, Inc. Elements External HDD
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader
        Bus 003 Device 002: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    fdisk doesn't show the external hard drive:

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004a743

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   152111103    76054528   83  Linux
        /dev/sda2       152113150   156301311     2094081    5  Extended
        /dev/sda5       152113152   156301311     2094080   82  Linux swap / Solaris

    When I run testdisk:

        TestDisk 6.14, Data Recovery Utility, July 2013
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        TestDisk is free software, and comes with ABSOLUTELY NO WARRANTY.

        Select a media (use Arrow keys, then press Enter):
        >Disk /dev/sda - 80 GB / 74 GiB - ST980825AS
         Disk /dev/sdb - 2199 GB / 2048 GiB

    and choose /dev/sdb, then Intel -> Analyse, I get a partition read error:

        Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 2097152 64 32
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition: Read error

    Here is the output of dmesg:

        [11948.549171] Add. Sense: Invalid command operation code
        [11948.549177] sd 2:0:0:0: [sdb] CDB:
        [11948.549181] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
        [11948.550489] sd 2:0:0:0: [sdb] Invalid command failure
        [11948.550495] sd 2:0:0:0: [sdb]
        [11948.550499] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [11948.550505] sd 2:0:0:0: [sdb]
        [11948.550508] Sense Key : Illegal Request [current]
        [11948.550514] Info fld=0x0
        [11948.550519] sd 2:0:0:0: [sdb]
        [11948.550525] Add. Sense: Invalid command operation code
        [11948.550531] sd 2:0:0:0: [sdb] CDB:
        [11948.550534] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
        ... (the same Invalid command failure / Illegal Request sequence repeats twice more)

    If possible I want to retrieve at least some data from this hard drive. If that's not possible, I would like to format it and use it. Any help will be greatly appreciated. Thanks
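
    A note on the recovery goal: the impossible reported capacity (2199 GB / 2048 GiB) and the rejected Read(16) commands often point at the USB-to-SATA bridge in the enclosure rather than at the disk itself, so trying the drive in another enclosure or a SATA dock is worth a shot. Once the drive enumerates readably, a sketch using GNU ddrescue (assuming it still appears as /dev/sdb and there is space for a full image):

        # first pass: grab the easy sectors, keep a map file so runs can resume
        ddrescue -n /dev/sdb usb.img usb.map
        # second pass: retry the bad areas a few times
        ddrescue -r3 /dev/sdb usb.img usb.map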

  • Printing Booklet Page Size in Adobe Reader 4-in-1

    - by Justin Nathanael Waters
    So I have a 70-page pdf document that I'm trying to condense into a small booklet. I tried creating a formula to perform the imposition manually, but it got ugly fast:

        35,36,34,37,17,54,16,55,33,38,32,39,15,56,14,57,31,40,30,41,13,58,12,59
        29,42,28,43,11,60,10,61,27,44,26,45,9,62,8,63,25,46,24,47,7,64,6,65
        23,48,22,49,5,66,4,67,21,50,20,51,3,68,2,69,19,52,18,53,1,70

    Once I print the booklet I should be able to cut the sheets in half, set the bottom half behind the top, and staple it for a simple book. That means sheet 1 should carry pages 35,36,17,54,34,37,16,55; sheet 2 should carry pages 33,38,15,56,32,39,14,57; and several sheets later, sheet 9 should carry pages 19,52,1,70,18,53. But doing this manually is a headache, and it seems like the booklet function should contain functionality that can perform this. I'm printing on a commercial Konica Minolta C452.
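
    As far as I know, Adobe Reader's booklet mode only produces a plain 2-up saddle-stitch imposition, so the 4-in-1 cut-and-stack order above probably needs an external tool. One partial approach (it yields a standard 2-up booklet, not the exact cut-and-stack order described) is pdfbook from the pdfjam suite:

        pdfbook input.pdf   # typically writes input-book.pdf with signature-ordered pages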

  • Achieving A 1600 x 900 Resolution For (Guest OS) Ubuntu Under (Host OS) Windows 7

    - by panamack
    I am running Ubuntu 11.10 as a guest OS using VirtualBox 4.1.16 installed on Windows 7 Ultimate. On my laptop I'd like to be able to run Ubuntu in full screen mode at 1600 x 900. Within the virtual machine I am only offered 4:3 display settings such as 1600 x 1200, 1440 x 1050 etc. I have Guest Additions installed. At the Windows command prompt, I tried typing:

        VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16"

    This didn't work; there is still no 1600 x 900 resolution available in Ubuntu. I tried this having read the following section of the VirtualBox help (it also says something about a 'video mode hint feature'; I'm not sure what this means):

        9.7. Advanced display configuration
        9.7.1. Custom VESA resolutions

        Apart from the standard VESA resolutions, the VirtualBox VESA BIOS allows you to add up to 16
        custom video modes which will be reported to the guest operating system. When using Windows
        guests with the VirtualBox Guest Additions, a custom graphics driver will be used instead of the
        fallback VESA solution so this information does not apply. Additional video modes can be
        configured for each VM using the extra data facility. The extra data key is called
        CustomVideoMode<x> with <x> being a number from 1 to 16. Please note that modes will be read
        from 1 until either the following number is not defined or 16 is reached. The following example
        adds a video mode that corresponds to the native display resolution of many notebook computers:

            VBoxManage setextradata "VM name" "CustomVideoMode1" "1400x1050x16"

        The VESA mode IDs for custom video modes start at 0x160. In order to use the above defined
        custom video mode, the following command line has to be supplied to Linux:

            vga = 0x200 | 0x160
            vga = 864

        For guest operating systems with VirtualBox Guest Additions, a custom video mode can be set
        using the video mode hint feature.

    UPDATE 02.06.12: I've just tried creating a new virtual machine using the same original disk image I had been given. It had Guest Additions v4.1.6 installed and provided me with the 1600 x 900 full screen display I want. It's after I then install Guest Additions v4.1.16 (the version included with my VirtualBox installation) that my only choices are 4:3 displays, e.g. 1600 x 1200. Seems this is the cause.
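
    Two further avenues may be worth trying. The 'video mode hint' the manual mentions is sent from the host while the VM is running:

        VBoxManage controlvm "Virtual Ubuntu Coursera ESSAAS" setvideomodehint 1600 900 32

    Alternatively, a sketch from inside the guest, assuming an X session where the VirtualBox output is named VBOX0 (check the name with plain xrandr first) and using the modeline that cvt 1600 900 prints:

        cvt 1600 900
        xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
        xrandr --addmode VBOX0 "1600x900_60.00"
        xrandr --output VBOX0 --mode "1600x900_60.00"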

  • Excel 2010 - "Excel cannot complete this task with available resources"

    - by Jestep
    I'm getting this error when trying to sort a document: "Excel cannot complete this task with available resources." The document isn't particularly large, about 4,000 rows, and I can't figure out why this started happening. I can sort this same file fine in everything back to Excel 2000 on older, crappier computers. The computer is running Win 7 x64 with 16 GB of RAM and another 16 GB of virtual memory. There's no possible way all of that memory is actually getting exhausted when I can perform the same sort on an older XP machine with 512 MB of RAM, unless 2010's memory usage is inconceivably poorly designed. I found a few forum posts suggesting there might be a bug related to a security update. Any suggestions would be appreciated.

  • ESX Firewall Command Troubles

    - by John
    Hi, I am working on creating some firewall rules to stop the SSH brute-force attacks we have seen recently on our ESX server hosts. I have tried the following rules from the CLI, first blocking all SSH traffic and then allowing the two ranges that I am interested in:

        esxcfg-firewall --ipruleAdd 0.0.0.0/0,22,tcp,REJECT,"Block_SSH"
        esxcfg-firewall --ipruleAdd 11.130.0.0/16,22,tcp,ACCEPT,"Allow_PUBLIC_SSH"
        esxcfg-firewall --ipruleAdd 10.130.0.0/16,22,tcp,ACCEPT,"Allow_PRIVATE_SSH"

    However, these rules are not working as intended. I know that if you do not enter the block rule first, the allow rules will not be processed. We are now seeing the first allow rule entered being ignored, so the block rule works and only the last allow rule entered works. Does anyone have ideas on how I could allow a few different ranges of IPs with the esxcfg-firewall --ipruleAdd command? I am at a loss and am having a hard time locating examples or further documentation about this. Thanks in advance for your help.

  • Excel Single column into rows, VBA script insight

    - by Sanityvoid
    Okay, so this is much like the linked question Paginate Rows into Columns in Excel, but mine is a bit different. I have a lot of data in column A, and I want to take every 14 to 15 rows and make them one new row with multiple columns. I'm trying to get the data into a format that SQL can take in; I figured the best way was to get it into rows and then make a CSV of the data. So it would look like the below (wow, the format totally didn't stick when posting):

        column A  column B  C  D  etc
        1  1  2  3  x
        2  16 17 a  b
        3  x  y  z  15 16 17 a b c

    I can clarify if needed, but I'm stumped on how to get the data out of the single column with so many rows in it. Thanks for the help!
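
    Since the end goal is a CSV for SQL import, a command-line sketch may be an acceptable shortcut, assuming the column is exported to a plain text file (one value per line, named column.txt here) and a fixed group size of 15:

        # join every 15 consecutive lines into one comma-separated row
        awk '{ printf "%s%s", $0, (NR % 15 ? "," : "\n") }' column.txt > rows.csv

    A trailing partial group is left without a final newline; a VBA loop inside Excel would use the same arithmetic, placing the i-th value at row (i - 1) \ 15 + 1, column (i - 1) mod 15 + 1.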

  • Does connecting to the default host via public IP from within its subnet cause any issues?

    - by username
    I'm setting up a small office network with a single public IP (let's say it's 69.16.230.117). I've configured NAT on the router with incoming traffic forwarded to the server (say the server has a private IP of 192.168.0.2). Is it okay to configure the client machines on the same subnet to access the server via the router's public IP (69.16.230.117)? In practice it's never caused me problems, but I've heard, here and there, that it is a bad idea, and one should use the private IP (192.168.0.2). Does connecting to the default host via public IP from within its subnet cause any issues? Please refrain from writing "never! it breaks the intranet!" ;-)
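
    For what it's worth, this pattern relies on NAT hairpinning (also called NAT loopback), and whether it works depends entirely on the router; hairpinned traffic also makes a needless round trip through it. A common way to sidestep the question, assuming clients reach the server by a hostname such as server.example.com (a placeholder here) rather than a raw IP, is to override the name on the inside:

        # /etc/hosts on each internal client (split-horizon DNS scales better)
        192.168.0.2    server.example.com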

  • What components should I upgrade to squeeze more life out of my PC? [on hold]

    - by Jared
    Hi, my current specs are:

        CPU: Intel Core i5 750 2.66 GHz, Socket 1156
        Motherboard: Asus P7P55 LX, Socket 1156, 4x DIMM DDR3, 2x PCIe-16, 3x PCI, 2x PCIe-1, 14x USB2, Audio, 1x ATA, 6x SATA, RAID, ATX
        HDD: Seagate Barracuda 7200.11 ST31500341AS, 1500 GB, 7200 rpm, 32 MB cache, SATA-2
        RAM: G.Skill Ripjaw F3-12800CL8D-4GBRM, 2x 2 GB, DDR3-1600, PC3-12800, CL8, DIMM
        Video: MSI Radeon 5850 R5850-PM2D1G, 1024 MB, DDR5, PCIe-16, DVI, CrossFire, HDMI
        PSU: Vantec ION2, 620W ATX, SLI ready, black

    Firstly, I know it badly needs an SSD, for which I'm thinking of the Samsung 250 GB 840 EVO series. The RAM could do with another 4 GB, for which I am thinking of replacing the 2x 2 GB with 2x 4 GB of the same type. My video card is struggling with the latest releases, so should I ditch it and go for something newer (ATI 7850, GTX 660?) or try to get another 5850 for CrossFire support? Bear in mind that 5850s are hard to come by now, as shops don't usually stock them, which means buying second hand.

  • (date) script for freenas

    - by malgaboy
    My webserver is running FreeNAS 8.0.4. Currently /etc/crontab contains this daily task:

        find /mnt/vol1/1 -type f -mtime +16 -exec rm -R {} \;

    This lets me delete files older than 16 days; in fact, every night a new file is added to the /mnt/vol1/1 folder. But now I want to make a script that checks the folder and keeps only the two youngest files, without taking the age in days into consideration. Thanks in advance. (Sorry for my bad English.)
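
    A minimal sketch of that script, assuming the folder contains only these nightly files and the names have no spaces or newlines in them:

        #!/bin/sh
        # list files newest-first, skip the first two, remove the rest
        cd /mnt/vol1/1 && ls -t | tail -n +3 | xargs rm -f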

  • Drawing a WPF UserControl with DataBinding to an Image

    - by LorenVS
    Hey Everyone, So I'm trying to use a WPF User Control to generate a ton of images from a dataset where each item in the dataset would produce an image... I'm hoping I can set it up in such a way that I can use WPF databinding, and for each item in the dataset, create an instance of my user control, set the dependency property that corresponds to my data item, and then draw the user control to an image, but I'm having problems getting it all working (not sure whether databinding or drawing to the image is my problem) Sorry for the massive code dump, but I've been trying to get this working for a couple of hours now, and WPF just doesn't like me (have to learn at some point though...) My User Control looks like this: <UserControl x:Class="Bleargh.ImageTemplate" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:c="clr-namespace:Bleargh" x:Name="ImageTemplateContainer" Height="300" Width="300"> <Canvas> <TextBlock Canvas.Left="50" Canvas.Top="50" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.Customer,ElementName=ImageTemplateContainer}" /> <TextBlock Canvas.Left="50" Canvas.Top="100" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.Location,ElementName=ImageTemplateContainer}" /> <TextBlock Canvas.Left="50" Canvas.Top="150" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.ItemNumber,ElementName=ImageTemplateContainer}" /> <TextBlock Canvas.Left="50" Canvas.Top="200" Width="200" Height="25" FontSize="16" FontFamily="Calibri" Text="{Binding Path=Booking.Description,ElementName=ImageTemplateContainer}" /> </Canvas> </UserControl> And I've added a dependency property of type "Booking" to my user control that I'm hoping will be the source for the databound values: public partial class ImageTemplate : UserControl { public static readonly DependencyProperty BookingProperty = DependencyProperty.Register("Booking", typeof(Booking), typeof(ImageTemplate)); public Booking Booking { get { return (Booking)GetValue(BookingProperty); } set { SetValue(BookingProperty, value); } } public ImageTemplate() { InitializeComponent(); } } And I'm using the following code to render the control: List<Booking> bookings = Booking.GetSome(); for(int i = 0; i < bookings.Count; i++) { ImageTemplate template = new ImageTemplate(); template.Booking = bookings[i]; RenderTargetBitmap bitmap = new RenderTargetBitmap( (int)template.Width, (int)template.Height, 120.0, 120.0, PixelFormats.Pbgra32); bitmap.Render(template); BitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(bitmap)); using (Stream s = File.OpenWrite(@"C:\Code\Bleargh\RawImages\" + i.ToString() + ".png")) { encoder.Save(s); } } EDIT: I should add that the process works without any errors whatsoever, but I end up with a directory full of plain-white images, not text or anything... And I have confirmed using the debugger that my Booking objects are being filled with the proper data... EDIT 2: Did something I should have done a long time ago, set a background on my canvas, but that didn't change the output image at all, so my problem is most definitely somehow to do with my drawing code (although there may be something wrong with my databinding too)

  • Convert NSData to a primitive variable with IEEE-754 or two's complement?

    - by William GILLARD
    Hi every one. I am new programmer in Obj-C and cocoa. Im a trying to write a framework which will be used to read a binary files (Flexible Image Transport System or FITS binary files, usually used by astronomers). The binary data, that I am interested to extract, can have various formats and I get its properties by reading the header of the FITS file. Up to now, I manage to create a class to store the content of the FITS file and to isolate the header into a NSString object and the binary data into a NSData object. I also manage to write method which allow me to extract the key values from the header that are very valuable to interpret the binary data. I am now trying to convert the NSData object into a primitive array (array of double, int, short ...). But, here, I get stuck and would appreciate any help. According to the documentation I have about the FITS file, I have 5 possibilities to interpret the binary data depending on the value of the BITPIX key: BITPIX value | Data represented 8 | Char or unsigned binary int 16 | 16-bit two's complement binary integer 32 | 32-bit two's complement binary integer 64 | 64-bit two's complement binary integer -32 | IEEE single precision floating-point -64 | IEEE double precision floating-point I already write the peace of code, shown bellow, to try to convert the NSData into a primitive array. // self reefer to my FITS class which contain a NSString object // with the content of the header and a NSData object with the binary data. -(void*) GetArray { switch (BITPIX) { case 8: return [self GetArrayOfUInt]; break; case 16: return [self GetArrayOfInt]; break; case 32: return [self GetArrayOfLongInt]; break; case 64: return [self GetArrayOfLongLong]; break; case -32: return [self GetArrayOfFloat]; break; case -64: return [self GetArrayOfDouble]; break; default: return NULL; } } // then I show you the method to convert the NSData into a primitive array. // I restrict my example to the case of 'double'. Code is similar for other methods // just change double by 'unsigned int' (BITPIX 8), 'short' (BITPIX 16) // 'int' (BITPIX 32) 'long lon' (BITPIX 64), 'float' (BITPIX -32). -(double*) GetArrayOfDouble { int Nelements=[self NPIXEL]; // Metod to extract, from the header // the number of element into the array NSLog(@"TOTAL NUMBER OF ELEMENTS [%i]\n",Nelements); //CREATE THE ARRAY double (*array)[Nelements]; // Get the total number of bits in the binary data int Nbit = abs(BITPIX)*GCOUNT*(PCOUNT + Nelements); // GCOUNT and PCOUNT are defined // into the header NSLog(@"TOTAL NUMBER OF BIT [%i]\n",Nbit); int i=0; //FILL THE ARRAY double Value; for(int bit=0; bit < Nbit; bit+=sizeof(double)) { [Img getBytes:&Value range:NSMakeRange(bit,sizeof(double))]; NSLog(@"[%i]:(%u)%.8G\n",i,bit,Value); (*array)[i]=Value; i++; } return (*array); } However, the value I print in the loop are very different from the expected values (compared using official FITS software). Therefore, I think that the Obj-C double does not use the IEEE-754 convention as well as the Obj-C int are not twos-complement. I am really not familiar with this two convention (IEEE and twos-complement) and would like to know how I can do this conversion with Obj-C. In advance many thanks for any help or information.

  • Maven Sonar problem

    - by senzacionale
    I want to use sonar for analysis but i can't get any data in localhost:9000 <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <artifactId>KIS</artifactId> <groupId>KIS</groupId> <version>1.0</version> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.4</version> <executions> <execution> <id>compile</id> <phase>compile</phase> <configuration> <tasks> <property name="compile_classpath" refid="maven.compile.classpath"/> <property name="runtime_classpath" refid="maven.runtime.classpath"/> <property name="test_classpath" refid="maven.test.classpath"/> <property name="plugin_classpath" refid="maven.plugin.classpath"/> <ant antfile="${basedir}/build.xml"> <target name="maven-compile"/> </ant> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> output when running sonar: jar file is empty [INFO] Executed tasks [INFO] [resources:testResources {execution: default-testResources}] [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources [INFO] [compiler:testCompile {execution: default-testCompile}] [INFO] No sources to compile [INFO] [surefire:test {execution: default-test}] [INFO] No tests to run. [INFO] [jar:jar {execution: default-jar}] [WARNING] JAR will be empty - no content was marked for inclusion! [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar [INFO] [install:install {execution: default-install}] [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar [INFO] ------------------------------------------------------------------------ [INFO] Building Unnamed - KIS:KIS:jar:1.0 [INFO] task-segment: [sonar:sonar] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] [sonar:sonar {execution: default-cli}] [INFO] Sonar host: http://localhost:9000 [INFO] Sonar version: 2.1.2 [INFO] [sonar-core:internal {execution: default-internal}] [INFO] Database dialect class org.sonar.api.database.dialect.Oracle [INFO] ------------- Analyzing Unnamed - KIS:KIS:jar:1.0 [INFO] Selected quality profile : KIS, language=java [INFO] Configure maven plugins... [INFO] Sensor SquidSensor... [INFO] Sensor SquidSensor done: 16 ms [INFO] Sensor JavaSourceImporter... [INFO] Sensor JavaSourceImporter done: 0 ms [INFO] Sensor AsynchronousMeasuresSensor... [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms [INFO] Sensor SurefireSensor... [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports [INFO] Sensor SurefireSensor done: 47 ms [INFO] Sensor ProfileSensor... [INFO] Sensor ProfileSensor done: 16 ms [INFO] Sensor ProjectLinksSensor... [INFO] Sensor ProjectLinksSensor done: 0 ms [INFO] Sensor VersionEventsSensor... [INFO] Sensor VersionEventsSensor done: 31 ms [INFO] Sensor CpdSensor... [INFO] Sensor CpdSensor done: 0 ms [INFO] Sensor Maven dependencies... [INFO] Sensor Maven dependencies done: 16 ms [INFO] Execute decorators... [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000 [INFO] Database optimization... 
[INFO] Database optimization done: 172 ms [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ [INFO] Total time: 6 minutes 16 seconds [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010 [INFO] Final Memory: 24M/43M [INFO] ------------------------------------------------------------------------ any idea why, i successfully compile with maven ant plugin java project.

  • Processing an audio WAV file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output. #include <stdio.h> #include <stdlib.h> typedef struct header { char chunk_id[4]; int chunk_size; char format[4]; char subchunk1_id[4]; int subchunk1_size; short int audio_format; short int num_channels; int sample_rate; int byte_rate; short int block_align; short int bits_per_sample; short int extra_param_size; char subchunk2_id[4]; int subchunk2_size; } header; typedef struct header* header_p; void scale_wav_file(char * input, float factor, int is_8bit) { FILE * infile = fopen(input, "rb"); FILE * outfile = fopen("outfile.wav", "wb"); int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678; // used for processing 8-bit file unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE]; // used for processing 16-bit file short int inbuff16[BUFSIZE], outbuff16[BUFSIZE]; // header_p points to a header struct that contains the file's metadata fields header_p meta = (header_p)malloc(sizeof(header)); if (infile) { // read and write header data fread(meta, 1, sizeof(header), infile); fwrite(meta, 1, sizeof(meta), outfile); while (!feof(infile)) { if (is_8bit) { fread(inbuff8, 1, BUFSIZE, infile); } else { fread(inbuff16, 1, BUFSIZE, infile); } // scale amplitude for 8/16 bits for (i=0; i < BUFSIZE; ++i) { if (is_8bit) { outbuff8[i] = factor * inbuff8[i]; if ((int)outbuff8[i] > MAX_8BIT_AMP) { outbuff8[i] = MAX_8BIT_AMP; } } else { outbuff16[i] = factor * inbuff16[i]; if ((int)outbuff16[i] > MAX_16BIT_AMP) { outbuff16[i] = MAX_16BIT_AMP; } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) { outbuff16[i] = -MAX_16BIT_AMP; } } } // write to output file for 8/16 bit if (is_8bit) { fwrite(outbuff8, 1, BUFSIZE, outfile); } else { fwrite(outbuff16, 1, BUFSIZE, outfile); } } } // cleanup if (infile) { fclose(infile); } if (outfile) { fclose(outfile); } if (meta) { free(meta); } } int main (int argc, char const *argv[]) { char infile[] = "file.wav"; float factor = 0.5; scale_wav_file(infile, factor, 0); return 0; } I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!

  • Calling a Delphi DLL from C#

    - by Wouter Roux
    Hi, I have a Delphi dll defined like this: TMPData = record Lastname, Firstname: array[0..40] of char; Birthday: TDateTime; Pid: array[0..16] of char; Title: array[0..20] of char; Female: Boolean; Street: array[0..40] of char; ZipCode: array[0..10] of char; City: array[0..40] of char; Phone, Fax, Department, Company: array[0..20] of char; Pn: array[0..40] of char; In: array[0..16] of char; Hi: array[0..8] of char; Account: array[0..20] of char; Valid, Status: array[0..10] of char; Country, NameAffix: array[0..20] of char; W, H: single; Bp: array[0..10] of char; SocialSecurityNumber: array[0..9] of char; State: array[0..2] of char; end; function Init(const tmpData: TMPData; var ErrorCode: integer; ResetFatalError: boolean = false): boolean; procedure GetData(out tmpData: TMPData); My current c# signatures looks like this: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] public struct TMPData { [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Lastname; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Firstname; [MarshalAs(UnmanagedType.R8)] public double Birthday; [MarshalAs(UnmanagedType.LPStr, SizeConst = 16)] public string Pid; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Title; [MarshalAs(UnmanagedType.Bool)] public bool Female; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Street; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string ZipCode; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string City; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Phone; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Fax; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Department; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Company; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Pn; [MarshalAs(UnmanagedType.LPStr, SizeConst = 16)] public string In; [MarshalAs(UnmanagedType.LPStr, SizeConst = 8)] public string Hi; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Account; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Valid; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Status; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Country; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string NameAffix; [MarshalAs(UnmanagedType.I4)] public int W; [MarshalAs(UnmanagedType.I4)] public int H; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Bp; [MarshalAs(UnmanagedType.LPStr, SizeConst = 9)] public string SocialSecurityNumber; [MarshalAs(UnmanagedType.LPStr, SizeConst = 2)] public string State; } [DllImport("MyDll.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool Init(TMPData tmpData, int ErrorCode, bool ResetFatalError); [DllImport("MyDll.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool GetData(out TMPData tmpData); I first call Init setting the BirthDay, LastName and FirstName. I then call GetData but the TMPData structure I get back is incorrect. The FirstName, LastName and Birthday fields are populated but the data is incorrect. Is the mapping correct? ( "array[0..40] of char" equal to "[MarshalAs(UnmanagedType.LPStr, SizeConst = 40)]" )?

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only:

        Addition        (c = a + b)
        Subtraction     (c = a - b)
        Division        (c = a / b)
        Multiplication  (c = a * b)
        Modulus         (c = a % b)
        Minimum         (c = min(a, b))
        Maximum         (c = max(a, b))
        Comparisons     (c = (a < b), c = (a == b), c = (a <= b), etc.)
        Jumps           (goto, for, etc.)

    The bitwise operations I want to be able to support are:

        Or               (c = a | b)
        And              (c = a & b)
        Xor              (c = a ^ b)
        Left Shift       (c = a << b)
        Right Shift      (c = a >> b)    (all integers are signed, so this is a problem)
        Signed Shift     (c = a >>> b)
        One's Complement (a = ~b)        (already found a solution, see below)

    Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data and instruction memory are abundant. A side note: jumps and branches are not expensive, and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do; in other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code:

        // Bitwise one's complement
        b = ~a;
        // Arithmetic one's complement
        b = -1 - a;

    I also remember the old shift hack when dividing with a power of two, so the bitwise shifts can be expressed as:

        // Bitwise left shift
        b = a << 4;
        // Arithmetic left shift
        b = a * 16; // 2^4 = 16

        // Signed right shift
        b = a >>> 4;
        // Arithmetic right shift
        b = a / 16;

    For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications:

        b = 1;
        switch (a) {
            case 15: b = b * 2;
            case 14: b = b * 2;
            // ... exploiting fallthrough (instruction memory is magnitudes larger)
            case 2:  b = b * 2;
            case 1:  b = b * 2;
        }

    Or a set-and-jump approach:

        switch (a) {
            case 15: b = 32768; break;
            case 14: b = 16384; break;
            // ... exploiting the fact that a jump is faster than one additional mul,
            //     at the cost of doubling the instruction memory footprint.
            case 2:  b = 4; break;
            case 1:  b = 2; break;
        }
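
    One standard identity-based route, sketched as plain math (operands treated as their unsigned 16-bit two's-complement patterns, conceptually a mod 2^16, so negative values need that adjustment first), builds everything from a bit-extraction primitive:

        bit_i(a) = floor(a / 2^i) mod 2
        a AND b  = sum for i = 0..15 of 2^i * bit_i(a) * bit_i(b)
        a OR b   = a + b - (a AND b)
        a XOR b  = a + b - 2 * (a AND b)

    The OR and XOR identities follow from per-bit addition with the carry accounted for, so once AND exists the other two cost only an addition and one or two subtractions each.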

  • How do I get confidence intervals without inverting a singular Hessian matrix?

    - by AmalieNot
    Hello. I recently posted this to reddit and it was suggested I come here, so here I am. I'm a student working on an epidemiology model in R, using maximum likelihood methods. I created my negative log likelihood function. It's sort of gross looking, but here it is: NLLdiff = function(v1, CV1, v2, CV2, st1 = (czI01 - czV01), st2 = (czI02 - czV02), st01 = czI01, st02 = czI02, tt1 = czT01, tt2 = czT02) { prob1 = (1 + v1 * CV1 * tt1)^(-1/CV1) prob2 = ( 1 + v2 * CV2 * tt2)^(-1/CV2) -(sum(dbinom(st1, st01, prob1, log = T)) + sum(dbinom(st2, st02, prob2, log = T))) } The reason the first line looks so awful is because most of the data it takes is inputted there. czI01, for example, is already declared. I did this simply so that my later calls to the function don't all have to have awful vectors in them. I then optimized for CV1, CV2, v1 and v2 using mle2 (library bbmle). That's also a bit gross looking, and looks like: ml.cz.diff = mle2 (NLLdiff, start=list(v1 = vguess, CV1 = cguess, v2 = vguess, CV2 = cguess), method="L-BFGS-B", lower = 0.0001) Now, everything works fine up until here. ml.cz.diff gives me values that I can turn into a plot that reasonably fits my data. I also have several different models, and can get AICc values to compare them. However, when I try to get confidence intervals around v1, CV1, v2 and CV2 I have problems. Basically, I get a negative bound on CV1, which is impossible as it actually represents a square number in the biological model as well as some warnings. The warnings are this: http://i.imgur.com/B3H2l.png . Is there a better way to get confidence intervals? Or, really, a way to get confidence intervals that make sense here? What I see happening is that, by coincidence, my hessian matrix is singular for some values in the optimization space. But, since I'm optimizing over 4 variables and don't have overly extensive programming knowledge, I can't come up with a good method of optimization that doesn't rely on the hessian. I have googled the problem - it suggested that my model's bad, but I'm reconstructing some work done before which suggests that my model's really not awful (the plots I make using the ml.cz.diff look like the plots of the original work). I have also read the relevant parts of the manual as well as Bolker's book Ecological Models in R. I have also tried different optimization methods, which resulted in a longer run time but the same errors. The "SANN" method didn't finish running within an hour, so I didn't wait around to see the result. tl;dr : my confidence intervals are bad, is there a relatively straightforward way to fix them in R. My vectors are: czT01 = c(5, 5, 5, 5, 5, 5, 5, 25, 25, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 50, 50) czT02 = c(5, 5, 5, 5, 5, 10, 10, 10, 10, 10, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 75, 75, 75, 75, 75) czI01 = c(25, 24, 22, 22, 26, 23, 25, 25, 25, 23, 25, 18, 21, 24, 22, 23, 25, 23, 25, 25, 25) czI02 = c(13, 16, 5, 18, 16, 13, 17, 22, 13, 15, 15, 22, 12, 12, 13, 13, 11, 19, 21, 13, 21, 18, 16, 15, 11) czV01 = c(1, 4, 5, 5, 2, 3, 4, 11, 8, 1, 11, 12, 10, 16, 5, 15, 18, 12, 23, 13, 22) czV02 = c(0, 3, 1, 5, 1, 6, 3, 4, 7, 12, 2, 8, 8, 5, 3, 6, 4, 6, 11, 5, 11, 1, 13, 9, 7) and I get my guesses by: v = -log((c(czI01, czI02) - c(czV01, czV02))/c(czI01, czI02))/c(czT01, czT02) vguess = mean(v) cguess = var(v)/vguess^2 It's also possible that I'm doing something else completely wrong, but my results seem reasonable so I haven't caught it.

  • AES byte systolic architecture

    - by anum
    we are implementing AES BYTE SYSTOLIC ARCHITECTURE. CODE:- module key_expansion(kld,clk,key,key_expand,en); input kld,clk,en; input [127:0] key; wire [31:0] w0,w1,w2,w3; output [127:0] key_expand; reg[127:0] key_expand; reg [31:0] w[3:0]; reg [3:0] ctr; //reg [31:0] w0,w1,w2,w3; wire [31:0] c0,c1,c2,c3; wire [31:0] tmp_w; wire [31:0] subword; wire [31:0] rcon; assign w0 = w[0]; assign w1 = w[1]; assign w2 = w[2]; assign w3 = w[3]; //always @(posedge clk) always @(posedge clk) begin w[0] <= #1 kld ? key[127:096] : w[0]^subword^rcon; end always @(posedge clk) begin w[1] <= #1 kld ? key[095:064] : w[0]^w[1]^subword^rcon; end always @(posedge clk) begin w[2] <= #1 kld ? key[063:032] : w[0]^w[2]^w[1]^subword^rcon; end always @(posedge clk) begin w[3] <= #1 kld ? key[031:000] : w[0]^w[3]^w[2]^w[1]^subword^rcon; end assign tmp_w = w[3]; aes_sbox u0( .a(tmp_w[23:16]), .d(subword[31:24])); aes_sbox u1( .a(tmp_w[15:08]), .d(subword[23:16])); aes_sbox u2( .a(tmp_w[07:00]), .d(subword[15:08])); aes_sbox u3( .a(tmp_w[31:24]), .d(subword[07:00])); aes_rcon r0( .clk(clk), .kld(kld), .out_rcon(rcon)); //assign key_expand={w0,w1,w2,w3}; //assign key_expand={w0,w1,w2,w3}; always@(posedge clk) begin if (!en) begin ctr<=0; end else if (|ctr) begin key_expand<=0; ctr<=(ctr+1)%16; end else if (!(|ctr)) begin key_expand<={w0,w1,w2,w3}; ctr<=(ctr+1)%16; end end endmodule problem:verilog code has been attached THE BASIC problem is that we want to generate a new key after 16 clock cycles.whereas initially it would generate a new key every posedge of clock.in order to stop the value from being assigned to w[0] w[1] w[2] w[3] we implemented an enable counter logic as under.it has enabled us to give output in key_expand after 16 cycles but the value of required keys has bin changed.because the key_expand takes up the latest value from w[0],w[1],w[2],w[3] where as we require the first value generated.. we should block the value to be assigned to w[0] to w[3] somehow ..but we are stuck.plz help.

  • Objective-C: Scope problems cellForRowAtIndexPath

    - by Mr. McPepperNuts
    How would I set each individual row in cellForRowAtIndexPath to the results of an array populated by a Fetch Request? (Fetch Request made when button is pressed.) - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // ... set up cell code here ... cell.textLabel.text = [results objectAtIndex:indexPath valueForKey:@"name"]; } warning: 'NSArray' may not respond to '-objectAtIndexPath:' Edit: - (NSArray *)SearchDatabaseForText:(NSString *)passdTextToSearchFor{ NSManagedObject *searchObj; XYZAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name contains [cd] %@", passdTextToSearchFor]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry" inManagedObjectContext:managedObjectContext]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [request setEntity: entity]; [request setPredicate: predicate]; NSError *error; results = [managedObjectContext executeFetchRequest:request error:&error]; // NSLog(@"results %@", results); if([results count] == 0){ NSLog(@"No results found"); searchObj = nil; self.tempString = @"No results found."; }else{ if ([[[results objectAtIndex:0] name] caseInsensitiveCompare:passdTextToSearchFor] == 0) { NSLog(@"results %@", [[results objectAtIndex:0] name]); searchObj = [results objectAtIndex:0]; }else{ NSLog(@"No results found"); self.tempString = @"No results found."; searchObj = nil; } } [tableView reloadData]; [request release]; [sortDescriptors release]; return results; } - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar{ textToSearchFor = mySearchBar.text; results = [self SearchDatabaseForText:textToSearchFor]; self.tempString = [myGlobalSearchObject valueForKey:@"name"]; NSLog(@"results count: %d", [results count]); NSLog(@"results 0: %@", [[results objectAtIndex:0] name]); NSLog(@"results 1: %@", [[results objectAtIndex:1] name]); } @end Console prints: 2010-06-10 16:11:18.581 XYZApp[10140:207] results count: 2 2010-06-10 16:11:18.581 XYZApp[10140:207] results 0: BB Bugs 2010-06-10 16:11:18.582 XYZApp[10140:207] results 1: BB Annie Program received signal: “EXC_BAD_ACCESS”. (gdb) Edit 2: BT: #0 0x95a91edb in objc_msgSend () #1 0x03b1fe20 in ?? 
() #2 0x0043cd2a in -[UITableViewRowData(UITableViewRowDataPrivate) _updateNumSections] () #3 0x0043ca9e in -[UITableViewRowData invalidateAllSections] () #4 0x002fc82f in -[UITableView(_UITableViewPrivate) _updateRowData] () #5 0x002f7313 in -[UITableView noteNumberOfRowsChanged] () #6 0x00301500 in -[UITableView reloadData] () #7 0x00008623 in -[SearchViewController SearchDatabaseForText:] (self=0x3d16190, _cmd=0xf02b, passdTextToSearchFor=0x3b29630) #8 0x000086ad in -[SearchViewController searchBarSearchButtonClicked:] (self=0x3d16190, _cmd=0x16492cc, searchBar=0x3d2dc50) #9 0x0047ac13 in -[UISearchBar(UISearchBarStatic) _searchFieldReturnPressed] () #10 0x0031094e in -[UIControl(Deprecated) sendAction:toTarget:forEvent:] () #11 0x00312f76 in -[UIControl(Internal) _sendActionsForEventMask:withEvent:] () #12 0x0032613b in -[UIFieldEditor webView:shouldInsertText:replacingDOMRange:givenAction:] () #13 0x01d5a72d in __invoking___ () #14 0x01d5a618 in -[NSInvocation invoke] () #15 0x0273fc0a in SendDelegateMessage () #16 0x033168bf in -[_WebSafeForwarder forwardInvocation:] () #17 0x01d7e6f4 in ___forwarding___ () #18 0x01d5a6c2 in __forwarding_prep_0___ () #19 0x03320fd4 in WebEditorClient::shouldInsertText () #20 0x0279dfed in WebCore::Editor::shouldInsertText () #21 0x027b67a5 in WebCore::Editor::insertParagraphSeparator () #22 0x0279d662 in WebCore::EventHandler::defaultTextInputEventHandler () #23 0x0276cee6 in WebCore::EventTargetNode::defaultEventHandler () #24 0x0276cb70 in WebCore::EventTargetNode::dispatchGenericEvent () #25 0x0276c611 in WebCore::EventTargetNode::dispatchEvent () #26 0x0279d327 in WebCore::EventHandler::handleTextInputEvent () #27 0x0279d229 in WebCore::Editor::insertText () #28 0x03320f4d in -[WebHTMLView(WebNSTextInputSupport) insertText:] () #29 0x0279d0b4 in -[WAKResponder tryToPerform:with:] () #30 0x03320a33 in -[WebView(WebViewEditingActions) _performResponderOperation:with:] () #31 0x03320990 in -[WebView(WebViewEditingActions) insertText:] () #32 0x00408231 in -[UIWebDocumentView insertText:] () #33 0x003ccd31 in -[UIKeyboardImpl acceptWord:firstDelete:addString:] () #34 0x003d2c8c in -[UIKeyboardImpl addInputString:fromVariantKey:] () #35 0x004d1a00 in -[UIKeyboardLayoutStar sendStringAction:forKey:] () #36 0x004d0285 in -[UIKeyboardLayoutStar handleHardwareKeyDownFromSimulator:] () #37 0x002b5bcb in -[UIApplication handleEvent:withNewEvent:] () #38 0x002b067f in -[UIApplication sendEvent:] () #39 0x002b7061 in _UIApplicationHandleEvent () #40 0x02542d59 in PurpleEventCallback () #41 0x01d55b80 in CFRunLoopRunSpecific () #42 0x01d54c48 in CFRunLoopRunInMode () #43 0x02541615 in GSEventRunModal () #44 0x025416da in GSEventRun () #45 0x002b7faf in UIApplicationMain () #46 0x00002578 in main (argc=1, argv=0xbfffef5c) at /Users/default/Documents/iPhone Projects/XYZApp/main.m:14

  • How to find minimum weight with maximum cost in 0-1 Knapsack algorithm?

    - by Nitin9791
    I am trying to solve the SPOJ problem Party Schedule. The problem statement is:

        You just received another bill which you cannot pay because you lack the money. Unfortunately, this is not the first time this has happened, and now you decide to investigate the cause of your constant monetary shortness. The reason is quite obvious: the lion's share of your money routinely disappears at the entrance of party localities. You make up your mind to solve the problem where it arises, namely at the parties themselves. You introduce a limit for your party budget and try to have the most possible fun with regard to this limit. You inquire beforehand about the entrance fee to each party and estimate how much fun you might have there. The list is readily compiled, but how do you actually pick the parties that give you the most fun and do not exceed your budget? Write a program which finds this optimal set of parties that offer the most fun. Keep in mind that your budget need not necessarily be reached exactly. Achieve the highest possible fun level, and do not spend more money than is absolutely necessary.

        Input: The first line of the input specifies your party budget and the number n of parties. The following n lines contain two numbers each. The first number indicates the entrance fee of each party. Parties cost between 5 and 25 francs. The second number indicates the amount of fun of each party, given as an integer number ranging from 0 to 10. The budget will not exceed 500 and there will be at most 100 parties. All numbers are separated by a single space. There are many test cases. Input ends with 0 0.

        Output: For each test case your program must output the sum of the entrance fees and the sum of all fun values of an optimal solution. Both numbers must be separated by a single space.

        Sample input:
        50 10
        12 3
        15 8
        16 9
        16 6
        10 2
        21 9
        18 4
        12 4
        17 8
        18 9
        50 10
        13 8
        19 10
        16 8
        12 9
        10 2
        12 8
        13 5
        15 5
        11 7
        16 2
        0 0

        Sample output:
        49 26
        48 32

    Now, I know this is an advanced version of the 0/1 knapsack problem where, along with the maximum fun value, we also have to find the minimum cost that achieves it within the budget. I have used DP to solve it, but I still get a wrong answer on submission, while the given test cases come out perfectly fine. My code (includes restored for completeness) is:

        #include <bits/stdc++.h>
        using namespace std;

        typedef vector<int> vi;
        #define pb push_back
        #define FOR(i,n) for(int i=0;i<n;i++)

        int main()
        {
            //freopen("input.txt","r",stdin);
            while(1)
            {
                int W,n;
                cin>>W>>n;
                if(W==0 && n==0) break;
                int K[n+1][W+1];
                vi val,wt;
                FOR(i,n)
                {
                    int x,y;
                    cin>>x>>y;
                    wt.pb(x);
                    val.pb(y);
                }
                FOR(i,n+1)
                {
                    FOR(w,W+1)
                    {
                        if(i==0 || w==0)
                        {
                            K[i][w]=0;
                        }
                        else if (wt[i-1] <= w)
                        {
                            if(val[i-1] + K[i-1][w-wt[i-1]]>=K[i-1][w])
                            {
                                K[i][w]=val[i-1] + K[i-1][w-wt[i-1]];
                            }
                            else
                            {
                                K[i][w]=K[i-1][w];
                            }
                        }
                        else
                        {
                            K[i][w] = K[i-1][w];
                        }
                    }
                }
                int a1=K[n][W],a2;
                for(int j=0;j<W;j++)
                {
                    if(K[n][j]==a1)
                    {
                        a2=j;
                        break;
                    }
                }
                cout<<a2<<" "<<a1<<"\n";
            }
            return 0;
        }

    Could anyone suggest what I am missing?

  • "C variable type sizes are machine dependent." Is it really true? signed & unsigned numbers ;

    - by claws
    Hello, I've been told that C types are machine dependent. Today I wanted to verify it. void legacyTypes() { /* character types */ char k_char = 'a'; //Signedness --> signed & unsigned signed char k_char_s = 'a'; unsigned char k_char_u = 'a'; /* integer types */ int k_int = 1; /* Same as "signed int" */ //Signedness --> signed & unsigned signed int k_int_s = -2; unsigned int k_int_u = 3; //Size --> short, _____, long, long long short int k_s_int = 4; long int k_l_int = 5; long long int k_ll_int = 6; /* real number types */ float k_float = 7; double k_double = 8; } I compiled it on a 32-Bit machine using minGW C compiler _legacyTypes: pushl %ebp movl %esp, %ebp subl $48, %esp movb $97, -1(%ebp) # char movb $97, -2(%ebp) # signed char movb $97, -3(%ebp) # unsigned char movl $1, -8(%ebp) # int movl $-2, -12(%ebp)# signed int movl $3, -16(%ebp) # unsigned int movw $4, -18(%ebp) # short int movl $5, -24(%ebp) # long int movl $6, -32(%ebp) # long long int movl $0, -28(%ebp) movl $0x40e00000, %eax movl %eax, -36(%ebp) fldl LC2 fstpl -48(%ebp) leave ret I compiled the same code on 64-Bit processor (Intel Core 2 Duo) on GCC (linux) legacyTypes: .LFB2: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 movq %rsp, %rbp .cfi_offset 6, -16 .cfi_def_cfa_register 6 movb $97, -1(%rbp) # char movb $97, -2(%rbp) # signed char movb $97, -3(%rbp) # unsigned char movl $1, -12(%rbp) # int movl $-2, -16(%rbp)# signed int movl $3, -20(%rbp) # unsigned int movw $4, -6(%rbp) # short int movq $5, -32(%rbp) # long int movq $6, -40(%rbp) # long long int movl $0x40e00000, %eax movl %eax, -24(%rbp) movabsq $4620693217682128896, %rax movq %rax, -48(%rbp) leave ret Observations char, signed char, unsigned char, int, unsigned int, signed int, short int, unsigned short int, signed short int all occupy same no. of bytes on both 32-Bit & 64-Bit Processor. The only change is in long int & long long int both of these occupy 32-bit on 32-bit machine & 64-bit on 64-bit machine. And also the pointers, which take 32-bit on 32-bit CPU & 64-bit on 64-bit CPU. Questions: I cannot say, what the books say is wrong. But I'm missing something here. What exactly does "Variable types are machine dependent mean?" As you can see, There is no difference between instructions for unsigned & signed numbers. Then how come the range of numbers that can be addressed using both is different? I was reading http://stackoverflow.com/questions/2511246/how-to-maintain-fixed-size-of-c-variable-types-over-different-machines I didn't get the purpose of the question or their answers. What maintaining fixed size? They all are the same. I didn't understand how those answers are going to ensure the same size.

  • How do I convert this C# code to PHP?

    - by user3694473
    I'm trying to convert this class from C# to PHP. How can I do it? Thanks in advance.

        public class CreateCode
        {
            public string SazBon(string MM)
            {
                string RET = "";
                string[] ME = new string[25];
                for (int i = 1; i < MM.Length; i += 2)
                {
                    ME[i] = MM[i - 1].ToString();
                }
                for (int j = 0; j < MM.Length; j += 2)
                {
                    ME[j] = MM[j + 1].ToString();
                }
                ME[20] = "1"; ME[21] = "OH"; ME[22] = "23"; ME[23] = "fXC"; ME[24] = "5";
                ME[5] = ME[14]; ME[13] = ME[23]; ME[2] = ME[22]; ME[18] = ME[21];
                ME[23] = ME[11]; ME[19] = ME[0];
                foreach (string item in ME)
                {
                    RET += item;
                }
                string BACK = Encrypt(RET, RET, 256);
                BACK = encryptString(BACK);
                return BACK;
            }

            string encryptString(string strToEncrypt) // md5
            {
                UTF8Encoding ue = new UTF8Encoding();
                byte[] bytes = ue.GetBytes(strToEncrypt);
                MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider();
                byte[] hashBytes = md5.ComputeHash(bytes);
                // Bytes to string
                return System.Text.RegularExpressions.Regex.Replace(
                    BitConverter.ToString(hashBytes), "-", "").ToLower();
            }

            private byte[] Encrypt(byte[] clearData, byte[] Key, byte[] IV)
            {
                MemoryStream ms = new MemoryStream();
                Rijndael alg = Rijndael.Create();
                alg.Key = Key;
                alg.IV = IV;
                CryptoStream cs = new CryptoStream(ms, alg.CreateEncryptor(), CryptoStreamMode.Write);
                cs.Write(clearData, 0, clearData.Length);
                cs.Close();
                byte[] encryptedData = ms.ToArray();
                return encryptedData;
            }

            byte[] A;

            private string Encrypt(string Data, string Password, int Bits)
            {
                byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(Data);
                PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password,
                    new byte[] { 0x00, 0x01, 0x02, 0x1C, 0x1D, 0x1E, 0x03, 0x04,
                                 0x05, 0x0F, 0x20, 0x21, 0xAD, 0xAF, 0xA4 });
                if (Bits == 128)
                {
                    byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(16), pdb.GetBytes(16));
                    return Convert.ToBase64String(encryptedData);
                }
                else if (Bits == 192)
                {
                    byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(24), pdb.GetBytes(16));
                    return Convert.ToBase64String(encryptedData);
                }
                else if (Bits == 256)
                {
                    byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(32), pdb.GetBytes(16));
                    return Convert.ToBase64String(encryptedData);
                }
                else
                {
                    return string.Concat(Bits);
                }
            } // AES
        }

  • Output image is not displayed

    - by gerry chocolatos
    so, im doing an image detection which i have to process on each red, green, n blue element to get the edge map and combine them become one to show the output. but it doesnt show my output image would anyone pls be kind enough to help me? here is my code so far. //get the red element process_red = new int[width * height]; counter = 0; for(int i = 0; i < 256; i++) { for(int j = 0; j < 256; j++) { int clr = buff_red.getRGB(j, i); int red = (clr & 0x00ff0000) >> 16; red = (0xFF<<24)|(red<<16)|(red<<8)|red; process_red[counter] = red; counter++; } } //set threshold value for red element int threshold = 100; for (int x = 0; x < width; x++) { for (int y = 0; y < height; y++) { int bin = (buff_red.getRGB(x, y) & 0x000000ff); if (bin < threshold) bin = 0; else bin = 255; buff_red.setRGB(x,y, 0xff000000 | bin << 16 | bin << 8 | bin); } } and i do the same way for my green n blue elements. and then i wanted to get to combination of the three by doing it this way: //combine the three elements process_combine = new int[width * height]; counter = 0; for(int i = 0; i < 256; i++) { for(int j = 0; j < 256; j++) { int clr_a = buff_red.getRGB(j, i); int ar = clr_a & 0x000000ff; int clr_b = buff_green.getRGB(j, i); int bg = clr_b & 0x000000ff; int clr_c = buff_blue.getRGB(j, i); int cb = clr_b & 0x000000ff; int alpha = 0xff000000; int combine = alpha|(ar<<16)|(bg<<8)|cb; process_combine[counter] = combine; counter++; } } buff_rgb = new BufferedImage(width,height, BufferedImage.TYPE_INT_ARGB); Graphics rgb; rgb = buff_rgb.getGraphics(); rgb.drawImage(output_rgb, 0, 0, null); rgb.dispose(); repaint(); and to show the output whic is from the combining process, i use a draw method: g.drawImage(buff_rgb,800,100,this); but still it doesnt show the image. can anyone pls help me? ur help is really appreciated. thanks.

  • How is it possible my array is broken?

    - by user1812765
    I have this piece of code: public lot merge (lot otherlot){ wafer[] mWaferarray = new wafer[16]; byte[] bytearray = new byte[16]; wafer resultwafer = new wafer(bytearray); wafer w1; wafer w2; int i; int[][] assignmentmatrix = HungarianAlgorithm.computeAssignments(convertinttofloat (solutionmatrix(otherlot))); for (i=0; i != assignmentmatrix.length ;i++){ w1 = otherlot.getWaferarray()[assignmentmatrix[i][0]]; w2 = getWaferarray()[assignmentmatrix[i][1]]; resultwafer.setWafer(w1.wafercompare(w2)); mWaferarray[i] = resultwafer; mWaferarray[i].print(); } System.out.println("HERE\n"); mWaferarray[5].toString(); resultlot = new lot(mWaferarray); resultlot.print();// Problem occurs here. return resultlot; } As you can see I create an array of wafers (selfdefined class). Then I fill this up with new wafers. When I print this array (mWaferarray[i].print()) it gives me the wanted results. But when I go out of the "for"-loop the array is broken and it is as if the last item I add to mWaferarray fills it up (the entire array, 16 long, is filled with this wafer). So if run this program this is what I get: 1011110010111100 0011011111111110 0111110111101101 1010111001101111 0110110111101111 1010110101111010 1010110111011110 1011111010111100 1111110011101110 0111111111011011 1111111111011010 1101111011111010 1010110101011110 0101111011011010 1011111011011000 0101111011011010 HERE 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 0101111011011010 As you can see it is as if the array is filled with the last wafer. I have been looking at this for some time now, I hope you guy can help me out. Thx in advance PS: my print functions are written like this: void print(){ int j; for (j=0; j != waferarray.length ;j++){ waferarray[j].print(); } } EDIT: added code for lot this is the beginning of the lot class public class lot { wafer[] waferarray = new wafer[16]; lot resultlot; public lot (wafer wafer1,wafer wafer2,wafer wafer3,wafer wafer4, wafer wafer5,wafer wafer6,wafer wafer7,wafer wafer8, wafer wafer9,wafer wafer10,wafer wafer11,wafer wafer12, wafer wafer13,wafer wafer14,wafer wafer15,wafer wafer16){ waferarray[0] = wafer1; waferarray[1] = wafer2; waferarray[2] = wafer3; waferarray[3] = wafer4; waferarray[4] = wafer5; waferarray[5] = wafer6; waferarray[6] = wafer7; waferarray[7] = wafer8; waferarray[8] = wafer9; waferarray[9] = wafer10; waferarray[10] = wafer11; waferarray[11] = wafer12; waferarray[12] = wafer13; waferarray[13] = wafer14; waferarray[14] = wafer15; waferarray[15] = wafer16; } public lot (wafer[] thiswaferarray){ waferarray = thiswaferarray; }
