Search Results

Search found 5124 results on 205 pages for 'clean'.


  • Unable to access Intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone. First, I am relatively new to Linux (but not to *nix). I have 4 disks assembled into the following Intel AHCI BIOS fake-RAID arrays:

        2x320GB RAID1 - used for operating systems, md126
        2x1TB   RAID1 - used for data, md125

    I used the 320GB array to install my operating system; the second array I didn't even select during the installation of Fedora 14. After successfully partitioning and installing Fedora, I tried to make the second array available. It was possible to make it visible in Linux with mdadm --assemble --scan; after that I created one maximum-size partition and one maximum-size ext4 filesystem on it, mounted it, and used it.

    After a restart there were a few I/O errors during boot regarding md125, the filesystem on it could not be mounted, and I was dropped into a repair shell. I commented the filesystem out of fstab and the system booted. To my surprise, the array was marked "auto-read-only":

        [root@localhost ~]# cat /proc/mdstat
        Personalities : [raid1]
        md125 : active (auto-read-only) raid1 sdc[1] sdd[0]
              976759808 blocks super external:/md127/0 [2/2] [UU]
        md127 : inactive sdc[1](S) sdd[0](S)
              4514 blocks super external:imsm
        md126 : active raid1 sda[1] sdb[0]
              312566784 blocks super external:/md1/0 [2/2] [UU]
        md1 : inactive sdb[1](S) sda[0](S)
              4514 blocks super external:imsm
        unused devices: <none>
        [root@localhost ~]#

    and the partition on it was not available as a device special file in /dev:

        [root@localhost ~]# ls -l /dev/md125*
        brw-rw---- 1 root disk 9, 125 Jan 6 15:50 /dev/md125
        [root@localhost ~]#

    But the partition is there according to fdisk:

        [root@localhost ~]# fdisk -l /dev/md125
        Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
        19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x1b238ea9

        Device Boot       Start         End     Blocks  Id System
        /dev/md125p1       2048  1953519615  976758784  83 Linux
        [root@localhost ~]#

    I tried to "activate" the array in different ways (I'm not experienced with mdadm, and the man page is gigantic, so I was only browsing it looking for my answer), but nothing worked: the array stayed "auto-read-only", and the device special file for the partition never appeared in /dev. Only after I recreated the partition via fdisk did it reappear in /dev... until the next reboot. So, my question is: how do I make the array automatically available after reboot?

    Here is some additional information. First, I am able to see the UUID of the array in blkid:

        [root@localhost ~]# blkid
        /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs"
        /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4"
        /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member"
        /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4"
        /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap"
        /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4"
        /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4"
        /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4"
        /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/sda: TYPE="isw_raid_member"
        /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4"
        [root@localhost ~]#

    Here is what /etc/mdadm.conf looks like:

        [root@localhost ~]# cat /etc/mdadm.conf
        # mdadm.conf written out by anaconda
        MAILADDR root
        AUTO +imsm +1.x -all
        ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9
        ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b
        ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a
        [root@localhost ~]#

    Here is what /proc/mdstat looks like after I recreate the partition in the array so that it becomes available:

        [root@localhost ~]# cat /proc/mdstat
        Personalities : [raid1]
        md125 : active raid1 sdc[1] sdd[0]
              976759808 blocks super external:/md127/0 [2/2] [UU]
        md127 : inactive sdc[1](S) sdd[0](S)
              4514 blocks super external:imsm
        md126 : active raid1 sda[1] sdb[0]
              312566784 blocks super external:/md1/0 [2/2] [UU]
        md1 : inactive sdb[1](S) sda[0](S)
              4514 blocks super external:imsm
        unused devices: <none>
        [root@localhost ~]#

    Detailed output regarding the array in question:

        [root@localhost ~]# mdadm --detail /dev/md125
        /dev/md125:
              Container : /dev/md127, member 0
             Raid Level : raid1
             Array Size : 976759808 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 2

            Update Time : Fri Jan 7 00:38:00 2011
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782

            Number   Major   Minor   RaidDevice State
               1       8       32        0      active sync   /dev/sdc
               0       8       48        1      active sync   /dev/sdd
        [root@localhost ~]#

    And /etc/fstab, with /data commented out (the filesystem that is on this array):

        #
        # /etc/fstab
        # Created by anaconda on Thu Jan 6 03:32:40 2011
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/vg00-rootlv  /          ext4    defaults        1 1
        UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d  /boot  ext4  defaults  1 2
        #UUID=420adfdd-6c4e-4552-93f0-2608938a4059  /data  ext4  defaults  0 1
        /dev/mapper/vg00-homelv  /home      ext4    defaults        1 2
        /dev/mapper/vg00-tmplv   /tmp       ext4    defaults        1 2
        /dev/mapper/vg00-varlv   /var       ext4    defaults        1 2
        /dev/mapper/vg00-swaplv  swap       swap    defaults        0 0
        tmpfs                    /dev/shm   tmpfs   defaults        0 0
        devpts                   /dev/pts   devpts  gid=5,mode=620  0 0
        sysfs                    /sys       sysfs   defaults        0 0
        proc                     /proc      proc    defaults        0 0
        [root@localhost ~]#

    Thanks in advance to everyone who even read this whole issue :-)


  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion.

    The first approach I considered was:

        class Customer : EntityBase<Customer>
        {
            public void ChangeEmail(string email)
            {
                if (string.IsNullOrWhiteSpace(email))
                    throw new DomainException("...");
                if (!email.IsEmail())
                    throw new DomainException();
                if (email.Contains("@mailinator.com"))
                    throw new DomainException();
            }
        }

    I do not actually like this validation because, even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension, closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come back and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex.

    So, following Udi Dahan's philosophy of making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this:

        class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
        {
            private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;

            public bool IsSatisfiedBy(Customer customer)
            {
                return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
            }
        }

    But then I realized that in order to follow this approach I had to mutate my entities first, to pass in the value being validated (in this case the email), and mutating them would cause my domain events to fire, which I would not want to happen until the new email is valid.

    So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:

        class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
        {
            public void IsValid(Customer entity, ChangeEmailCommand command)
            {
                if (!command.Email.HasValidDomain())
                    throw new DomainException("...");
            }
        }

    That is the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are considered injectable objects, they can have external dependencies injected if the validation requires it.

    Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed in the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be combined to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue it is a code smell - an anemic domain), I think the trade-off is acceptable.

    But there is one thing that I have not figured out how to implement in a clean way: how should I use these components? Since they will be injected, they will not fit naturally inside my domain entities, so basically I see two options:

    1. Pass the validators to each method of my entity.
    2. Validate my objects externally (from the command handler).

    I am not happy with option 1, so I will explain how I would do it with option 2:

        class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
        {
            // The validators required for this command are injected here.
            private IEnumerable<IDomainInvariantValidator> validators;

            public void Execute(ChangeEmailCommand command)
            {
                using (var t = this.unitOfWork.BeginTransaction())
                {
                    var customer = this.unitOfWork.Get<Customer>(command.CustomerId);
                    this.validators.ForEach(x => x.IsValid(customer, command));

                    // At this point I know the command is valid.
                    // The call to ChangeEmail will fire domain events as needed.
                    customer.ChangeEmail(command.Email);
                    t.Commit();
                }
            }
        }

    Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?

    EDIT: I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of an app. Implementing them with this in mind lets us extend them easily. Now imagine that a rules engine is implemented in the future; if the rules are encapsulated outside of the domain entities, this change would be easier to implement.
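    To make the composition point concrete, here is a minimal sketch of a composite that runs every validator and reports all broken invariants at once, instead of stopping at the first throw. CompositeInvariantValidator and EnforceAll are invented names, not from any framework; it reuses the IDomainInvariantValidator and DomainException shapes above:

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch: aggregate every violation before failing, so the
        // user sees all broken domain rules in one round trip.
        public class CompositeInvariantValidator<TEntity, TCommand>
        {
            private readonly IEnumerable<IDomainInvariantValidator<TEntity, TCommand>> validators;

            public CompositeInvariantValidator(
                IEnumerable<IDomainInvariantValidator<TEntity, TCommand>> validators)
            {
                this.validators = validators;
            }

            public void EnforceAll(TEntity entity, TCommand command)
            {
                var errors = new List<string>();
                foreach (var validator in validators)
                {
                    try
                    {
                        validator.IsValid(entity, command);
                    }
                    catch (DomainException ex)
                    {
                        // Collect the message and keep checking the remaining rules.
                        errors.Add(ex.Message);
                    }
                }

                if (errors.Count > 0)
                    throw new DomainException(string.Join("; ", errors.ToArray()));
            }
        }

    The command handler would then call EnforceAll(customer, command) in place of the ForEach line.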


  • fd partitions gone from 2 disks; md is happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, I badly need some help with this one. I run a server with a 6TB md RAID5 volume built over 7 x 1TB disks. I had to shut down the server recently, and when it came back up, 2 out of the 7 disks used for the RAID volume had lost their partition tables:

        dmesg:
        [   10.184167]  sda: sda1 sda2 sda3   // System disk
        [   10.202072]  sdb: sdb1
        [   10.210073]  sdc: sdc1
        [   10.222073]  sdd: sdd1
        [   10.229330]  sde: sde1
        [   10.239449]  sdf: sdf1
        [   11.099896]  sdg: unknown partition table
        [   11.255641]  sdh: unknown partition table

    All 7 disks have the same geometry and were configured alike:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x1e7481a5

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            1      121601   976760001   fd  Linux raid autodetect

    All 7 partitions (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in an md RAID5 XFS volume. When booting, md, which was (obviously) out of sync, kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; XFS tried to do some shenanigans as well:

        dmesg:
        [   19.566941] md: md0 stopped.
        [   19.817038] md: bind<sdc1>
        [   19.817339] md: bind<sdd1>
        [   19.817465] md: bind<sde1>
        [   19.817739] md: bind<sdf1>
        [   19.817917] md: bind<sdh>
        [   19.818079] md: bind<sdg>
        [   19.818198] md: bind<sdb1>
        [   19.818248] md: md0: raid array is not clean -- starting background reconstruction
        [   19.825259] raid5: device sdb1 operational as raid disk 0
        [   19.825261] raid5: device sdg operational as raid disk 6
        [   19.825262] raid5: device sdh operational as raid disk 5
        [   19.825264] raid5: device sdf1 operational as raid disk 4
        [   19.825265] raid5: device sde1 operational as raid disk 3
        [   19.825267] raid5: device sdd1 operational as raid disk 2
        [   19.825268] raid5: device sdc1 operational as raid disk 1
        [   19.825665] raid5: allocated 7334kB for md0
        [   19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2
        [   19.825669] RAID5 conf printout:
        [   19.825670]  --- rd:7 wd:7
        [   19.825671]  disk 0, o:1, dev:sdb1
        [   19.825672]  disk 1, o:1, dev:sdc1
        [   19.825673]  disk 2, o:1, dev:sdd1
        [   19.825675]  disk 3, o:1, dev:sde1
        [   19.825676]  disk 4, o:1, dev:sdf1
        [   19.825677]  disk 5, o:1, dev:sdh
        [   19.825679]  disk 6, o:1, dev:sdg
        [   19.899787] PM: Starting manual resume from disk
        [   28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device
        [   28.663228] XFS mounting filesystem md0
        [   28.884433] md: resync of RAID array md0
        [   28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
        [   28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
        [   28.884433] md: using 128k window, over a total of 976759936 blocks.
        [   29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal)
        [   32.680486] XFS: xlog_recover_process_data: bad clientid
        [   32.680495] XFS: log mount/recovery failed: error 5
        [   32.682773] XFS: log mount failed

    I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array, but it didn't work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and sdh1, which explains why it won't use them. I just don't know why those partitions are gone from /dev, or how to re-add them...

    blkid:

        /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2"
        /dev/sda2: TYPE="swap"
        /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3"
        /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"

    fdisk -l:

        Disk /dev/sda: 40.0 GB, 40020664320 bytes
        255 heads, 63 sectors/track, 4865 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x8c878c87

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        1          12       96358+  83  Linux
        /dev/sda2           13         134      979965   82  Linux swap / Solaris
        /dev/sda3          135        4865    38001757+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x1e7481a5

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xc9bdc1e9

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1            1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xcc356c30

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1            1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe87f7a3d

        Device Boot      Start         End      Blocks   Id  System
        /dev/sde1            1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xb17a2d22

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdf1            1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x8f3bce61

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdg1            1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xa98062ce

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdh1            1      121601   976760001   fd  Linux raid autodetect

    I really don't know what happened or how to recover from this mess. Needless to say, the 5TB or so of data sitting on those disks are very valuable to me... Has anybody ever experienced a similar situation, or does anyone know how to recover from it? Can someone help me? I'm really desperate... :x


  • VS2008 Windows Form Designer does not like my control.

    - by Thedric Walker
    I have a control that is created like so:

        public partial class MYControl : MyControlBase
        {
            public string InnerText
            {
                get { return textBox1.Text; }
                set { textBox1.Text = value; }
            }

            public MYControl()
            {
                InitializeComponent();
            }
        }

        partial class MYControl
        {
            /// <summary>
            /// Required designer variable.
            /// </summary>
            private System.ComponentModel.IContainer components = null;

            /// <summary>
            /// Clean up any resources being used.
            /// </summary>
            /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
            protected override void Dispose(bool disposing)
            {
                if (disposing && (components != null))
                {
                    components.Dispose();
                }
                base.Dispose(disposing);
            }

            #region Component Designer generated code

            /// <summary>
            /// Required method for Designer support - do not modify
            /// the contents of this method with the code editor.
            /// </summary>
            private void InitializeComponent()
            {
                this.textBox1 = new System.Windows.Forms.TextBox();
                this.listBox1 = new System.Windows.Forms.ListBox();
                this.label1 = new System.Windows.Forms.Label();
                this.SuspendLayout();
                //
                // textBox1
                //
                this.textBox1.Location = new System.Drawing.Point(28, 61);
                this.textBox1.Name = "textBox1";
                this.textBox1.Size = new System.Drawing.Size(100, 20);
                this.textBox1.TabIndex = 0;
                //
                // listBox1
                //
                this.listBox1.FormattingEnabled = true;
                this.listBox1.Location = new System.Drawing.Point(7, 106);
                this.listBox1.Name = "listBox1";
                this.listBox1.Size = new System.Drawing.Size(120, 95);
                this.listBox1.TabIndex = 1;
                //
                // label1
                //
                this.label1.AutoSize = true;
                this.label1.Location = new System.Drawing.Point(91, 42);
                this.label1.Name = "label1";
                this.label1.Size = new System.Drawing.Size(35, 13);
                this.label1.TabIndex = 2;
                this.label1.Text = "label1";
                //
                // MYControl
                //
                this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
                this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
                this.Controls.Add(this.label1);
                this.Controls.Add(this.listBox1);
                this.Controls.Add(this.textBox1);
                this.Name = "MYControl";
                this.Size = new System.Drawing.Size(135, 214);
                this.ResumeLayout(false);
                this.PerformLayout();
            }

            #endregion

            private System.Windows.Forms.Label label1;
        }

    MyControlBase contains the definition for the ListBox and TextBox. Now when I try to view this control in the Form Designer, it gives me these errors:

        The variable 'listBox1' is either undeclared or was never assigned.
        The variable 'textBox1' is either undeclared or was never assigned.

    This is obviously wrong, as they are defined in MyControlBase with public access. Is there any way to massage the Form Designer into allowing me to visually edit my control?
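    For reference, here is a rough sketch (my own reconstruction, not a verified fix) of the shape WinForms visual inheritance usually expects: the base class itself declares the controls as protected fields and assigns them in its own InitializeComponent, so a derived designer file has concrete assignments to find:

        // Sketch only: a hypothetical MyControlBase the designer could parse.
        // The fields are created and added by the base, and exposed as
        // protected so MYControl's designer-generated code can reference them.
        public class MyControlBase : System.Windows.Forms.UserControl
        {
            protected System.Windows.Forms.TextBox textBox1;
            protected System.Windows.Forms.ListBox listBox1;

            public MyControlBase()
            {
                InitializeComponent();
            }

            private void InitializeComponent()
            {
                this.textBox1 = new System.Windows.Forms.TextBox();
                this.listBox1 = new System.Windows.Forms.ListBox();
                this.SuspendLayout();
                this.Controls.Add(this.textBox1);
                this.Controls.Add(this.listBox1);
                this.ResumeLayout(false);
            }
        }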


  • ASP.NET - using System.IO.File.Delete() to delete file(s) from a directory inside wwwroot?

    - by Jim S
    Hello, I have an ASP.NET SOAP web service whose web method creates a PDF file, writes it to the "Download" directory of the application, and returns the URL to the user. Code:

        //Create the map images (MapPrinter) and insert them on the PDF (PagePrinter).
        MemoryStream mstream = null;
        FileStream fs = null;
        try
        {
            //Create the memorystream storing the pdf created.
            mstream = pgPrinter.GenerateMapImage();
            //Convert the memorystream to an array of bytes.
            byte[] byteArray = mstream.ToArray();
            //return byteArray;
            //Save PDF file to site's Download folder with a unique name.
            System.Text.StringBuilder sb = new System.Text.StringBuilder(Global.PhysicalDownloadPath);
            sb.Append("\\");
            string fileName = Guid.NewGuid().ToString() + ".pdf";
            sb.Append(fileName);
            string filePath = sb.ToString();
            fs = new FileStream(filePath, FileMode.CreateNew);
            fs.Write(byteArray, 0, byteArray.Length);
            string requestURI = this.Context.Request.Url.AbsoluteUri;
            string virtPath = requestURI.Remove(requestURI.IndexOf("Service.asmx")) + "Download/" + fileName;
            return virtPath;
        }
        catch (Exception ex)
        {
            throw new Exception("An error has occurred creating the map pdf.", ex);
        }
        finally
        {
            if (mstream != null) mstream.Close();
            if (fs != null) fs.Close();
            //Clean up resources
            if (pgPrinter != null) pgPrinter.Dispose();
        }

    Then in the Global.asax file of the web service, I set up a Timer in the Application_Start event listener. In the Timer's Elapsed event listener I look for any files in the Download directory that are older than the Timer interval (for testing = 1 min, for deployment ~20 min) and delete them. Code:

        //Interval to check for old files (milliseconds), also set to delete files older than now minus this interval.
        private static double deleteTimeInterval;
        private static System.Timers.Timer timer;
        //Physical path to Download folder. Everything in this folder will be checked for deletion.
        public static string PhysicalDownloadPath;

        void Application_Start(object sender, EventArgs e)
        {
            // Code that runs on application startup
            deleteTimeInterval = Convert.ToDouble(
                System.Configuration.ConfigurationManager.AppSettings["FileDeleteInterval"]);
            //Create timer with interval (milliseconds) whose elapse event will trigger
            //the delete of old files in the Download directory.
            timer = new System.Timers.Timer(deleteTimeInterval);
            timer.Enabled = true;
            timer.AutoReset = true;
            timer.Elapsed += new System.Timers.ElapsedEventHandler(OnTimedEvent);
            PhysicalDownloadPath = System.Web.Hosting.HostingEnvironment.ApplicationPhysicalPath + "Download";
        }

        private static void OnTimedEvent(object source, System.Timers.ElapsedEventArgs e)
        {
            //Delete the files older than the time interval in the Download folder.
            var folder = new System.IO.DirectoryInfo(PhysicalDownloadPath);
            System.IO.FileInfo[] files = folder.GetFiles();
            foreach (var file in files)
            {
                if (file.CreationTime < DateTime.Now.AddMilliseconds(-deleteTimeInterval))
                {
                    string path = PhysicalDownloadPath + "\\" + file.Name;
                    System.IO.File.Delete(path);
                }
            }
        }

    This works perfectly, with one exception. When I publish the web service application to inetpub\wwwroot (Windows 7, IIS7), it does not delete the old files in the Download directory. The app works perfectly when I publish to IIS from a physical directory not in wwwroot. It seems IIS places some sort of lock on files in the web root. I have tested impersonating an admin user to run the app and it still does not work. Any tips on how to circumvent the lock programmatically when in wwwroot? The client will probably want the app published to the root directory. Thank you very much.
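    For what it's worth, here is a small diagnostic variation of the cleanup loop (a sketch, not a definitive fix): it clears any read-only attribute first (File.Delete throws UnauthorizedAccessException on read-only files) and logs the exact exception, since a failure inside a timer callback is otherwise silent:

        private static void OnTimedEvent(object source, System.Timers.ElapsedEventArgs e)
        {
            var folder = new System.IO.DirectoryInfo(PhysicalDownloadPath);
            foreach (var file in folder.GetFiles())
            {
                if (file.CreationTime >= DateTime.Now.AddMilliseconds(-deleteTimeInterval))
                    continue;
                try
                {
                    // A read-only flag alone is enough to make the delete throw.
                    file.Attributes = System.IO.FileAttributes.Normal;
                    file.Delete();
                }
                catch (Exception ex)
                {
                    // Surface the real failure (access denied vs. file in use)
                    // instead of letting the timer swallow it.
                    System.Diagnostics.Trace.TraceError(
                        "Could not delete {0}: {1}", file.FullName, ex);
                }
            }
        }

    Whatever the logged exception turns out to be (UnauthorizedAccessException pointing at ACLs on wwwroot, or IOException pointing at an open handle) should narrow down where the "lock" actually comes from.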


  • Why are there so many man-made edge cases in IT, and is there any hope for simplification / unification?

    - by Hamish Grubijan
    This question is meant to generate discussion, so it is marked as community wiki.

    My observation is that the field of information technology grows so rapidly and randomly that for many people it takes a lot of time to learn the intricacies of tools that will be obsolete in just three short years. If you look at the questions asked on StackOverflow, at least half of them stem from the fact that some language / tool / API / protocol was poorly designed, is backwards, and has gotchas. There are so many things that distract developers from converting English into machine code; instead, they spend their time configuring stuff and gluing together things that do not really fit. How many times have you picked up somebody else's project (or someone picked up yours :) ) and realized that the program does not need half of the dialogs it has, and that the logic could be simplified a great deal? But it had to be made and sold here before a better thing was made and sold elsewhere, and hence all this rush.

    I often wish that things would just slow down. I do not want Microsoft Windows to run on my car's computer, my watch, my table, my toaster oven, and my toilet seat. I'd rather have a Windows that DOES NOT HAVE A REGISTRY, a Windows that allows two different programs to work on the same file at the same time, the way it works on Unix systems. Microsoft is just an example. I am looking forward to the day when I do not have to worry about Windows vs. Unix newline breaks; when System32 actually means that the directory contains 32-bit binaries, not 64-bit ones; when DLL hell and manifest hell are no longer an issue; when it takes me a lot less than 3 months on a new job to learn the system. I do not mean learning the entire code base of a product (depending on its size, that can take a long time). I mean remembering which build-assisting scripts are written in Perl (and which version of it) and which ones are done through .bat files, and knowing when I need to manually make every file in some directory writable before running a script, or else a critical step of a home-grown database maintenance tool will bomb and it will take 2 days to clean up. It makes me wonder whether humans enslaved computers, or whether it is the other way around.

    The key is that improving those things will not bring extra revenue, and hence those who take the time to fix crap like that are not "business focused". However, these imperfections irritate me immensely, particularly because my memory is limited - I can hold only a small portion of that useless knowledge of a system in my head at any given point in time. I must not be alone.

    Did you also happen to notice that a programmer can waste a lot of time on things that should have been a lot more straightforward? Is there hope? Will things get better/simpler in the future, or will there be a lot more IT crap floating around? I suppose I see diversity of tools, protocols, etc. as a bad thing. Thank you for participating.


  • Design Question - how do you break the dependency between classes using an interface?

    - by Seth Spearman
    Hello, I apologize in advance, but this will be a long question. I'm stuck. I am trying to learn unit testing, C#, and design patterns - all at once. (Maybe that's my problem.) As such, I am reading The Art of Unit Testing (Osherove), Clean Code (Martin), and Head First Design Patterns (O'Reilly). I am just now beginning to understand delegates and events (which you would see if you were to trawl my recent SO questions). I still don't quite get lambdas.

    To give all of this some context, I have set myself a learning project I am calling goAlarms. I have an Alarm class with the members you'd expect (NextAlarmTime, Name, AlarmGroup, event Trigger, etc.). I wanted the "timer" of the alarm to be extensible, so I created an IAlarmScheduler interface as follows:

        public interface IAlarmScheduler
        {
            Dictionary<string, Alarm> AlarmList { get; }
            void Startup();
            void Shutdown();
            void AddTrigger(string triggerName, string groupName, Alarm alarm);
            void RemoveTrigger(string triggerName);
            void PauseTrigger(string triggerName);
            void ResumeTrigger(string triggerName);
            void PauseTriggerGroup(string groupName);
            void ResumeTriggerGroup(string groupName);
            void SetSnoozeTrigger(string triggerName, int duration);
            void SetNextOccurrence(string triggerName, DateTime nextOccurrence);
        }

    This IAlarmScheduler interface defines a component that will RAISE an alarm (trigger), which bubbles up to my Alarm class and raises the Trigger event of the alarm itself. It is essentially the "timer" component. I have found that the Quartz.NET component is perfectly suited for this, so I created a QuartzAlarmScheduler class which implements IAlarmScheduler. All that is fine.

    My problem is that the Alarm class is abstract and I want to create a lot of different KINDS of alarms. For example, I already have a heartbeat alarm (triggered every (int) interval of minutes), an appointment alarm (triggered at a set date and time), a daily alarm (triggered every day at X), and perhaps others. Quartz.NET is perfectly suited to handle this. My problem is a design problem: I want to be able to instantiate an alarm of any kind without my Alarm class (or any derived classes) knowing anything about Quartz. The complication is that Quartz has awesome factories that return just the right setup for the triggers my Alarm classes will need. So, for example, I can get a Quartz trigger by using TriggerUtils.MakeMinutelyTrigger to create a trigger for the heartbeat alarm described above, or TriggerUtils.MakeDailyTrigger for the daily alarm.

    I guess I could sum it up this way: indirectly or directly, I want my alarm classes to be able to consume the TriggerUtils.Make* methods without knowing anything about them. I know that is a contradiction, but that is why I am asking the question. I thought about putting a delegate field into the alarm which would be assigned one of these Make* methods, but by doing that I am creating a hard dependency between Alarm and Quartz, which I want to avoid for both unit-testing purposes and design purposes. I thought of using a switch on the type in QuartzAlarmScheduler, per here, but I know that is bad design and I am trying to learn good design.

    If I may editorialize a bit: I've decided that coding (predefined) classes is easy. Design is HARD... in fact, really hard, and I am fighting the feeling of being stupid right now. I guess I want to know whether you really smart people took a while to really understand and master this stuff, or whether I should feel stupid (as I do) because I haven't grasped it better in the couple of weeks/months I have been studying. You guys are awesome, and thanks in advance for your answers. Seth
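    To show the shape of what I am imagining, here is a rough sketch of the factory-interface idea (all names invented; nothing below is actual Quartz API):

        using System;

        // The domain owns this abstraction, so Alarm subclasses never see Quartz.
        public interface ITriggerFactory
        {
            ITriggerHandle MakeMinutely(int intervalMinutes);
            ITriggerHandle MakeDaily(int hour, int minute);
        }

        // An opaque handle the domain can hold without knowing the scheduler.
        public interface ITriggerHandle
        {
            string Name { get; }
        }

        // A Quartz-backed implementation would live next to QuartzAlarmScheduler
        // and wrap the TriggerUtils.Make* calls behind these methods. For unit
        // tests, a fake keeps everything in memory:
        public class FakeTriggerFactory : ITriggerFactory
        {
            public ITriggerHandle MakeMinutely(int intervalMinutes)
            {
                return new FakeHandle("minutely:" + intervalMinutes);
            }

            public ITriggerHandle MakeDaily(int hour, int minute)
            {
                return new FakeHandle("daily:" + hour + ":" + minute);
            }

            private class FakeHandle : ITriggerHandle
            {
                private readonly string name;
                public FakeHandle(string name) { this.name = name; }
                public string Name { get { return name; } }
            }
        }

    The point of the sketch is only the direction of the dependency: the alarms depend on the interface, and only one adapter class ever touches Quartz.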


  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server on the web with Ubuntu 12.04 LTS / 32-bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I can't get Tomcat 7 up and running:

        > java -version
        java version "1.7.0_09"
        Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
        Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode)

        > /etc/init.d/tomcat start
        Starting Tomcat
        Using CATALINA_BASE:   /usr/local/tomcat
        Using CATALINA_HOME:   /usr/local/tomcat
        Using CATALINA_TMPDIR: /usr/local/tomcat/temp
        Using JRE_HOME:        /usr/lib/jvm/java-7-oracle
        Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar

        > telnet localhost 8080
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    netstat sometimes shows a Java process, most of the time not. If it does, nothing works either. Does anyone have a solution, or has anyone encountered a similar situation? Here are the contents of catalina.out:

        16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init
        INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
        16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["http-bio-8080"]
        16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
        16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 1509 ms
        16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal
        INFO: Starting service Catalina
        16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal
        INFO: Starting Servlet Engine: Apache Tomcat/7.0.29
        16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory
        INFO: Deploying web application directory /usr/local/tomcat/webapps/manager

    Here are the results of ps -ef, iptables --list and netstat -plut:

        > ps -ef
        UID        PID  PPID  C STIME TTY          TIME CMD
        root         1     0  0 Nov16 ?        00:00:00 init
        root         2     1  0 Nov16 ?        00:00:00 [kthreadd/206616]
        root         3     2  0 Nov16 ?        00:00:00 [khelper/2066167]
        root         4     2  0 Nov16 ?        00:00:00 [rpciod/2066167/]
        root         5     2  0 Nov16 ?        00:00:00 [rpciod/2066167/]
        root         6     2  0 Nov16 ?        00:00:00 [rpciod/2066167/]
        root         7     2  0 Nov16 ?        00:00:00 [rpciod/2066167/]
        root         8     2  0 Nov16 ?        00:00:00 [nfsiod/2066167]
        root       119     1  0 Nov16 ?        00:00:00 upstart-udev-bridge --daemon
        root       125     1  0 Nov16 ?        00:00:00 /sbin/udevd --daemon
        root       157   125  0 Nov16 ?        00:00:00 /sbin/udevd --daemon
        root       158   125  0 Nov16 ?        00:00:00 /sbin/udevd --daemon
        root       205     1  0 Nov16 ?        00:00:00 upstart-socket-bridge --daemon
        root       276     1  0 Nov16 ?        00:00:00 /usr/sbin/sshd -D
        root       335     1  0 Nov16 ?        00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd
        root       348     1  0 Nov16 ?        00:00:00 cron
        syslog     368     1  0 Nov16 ?        00:00:00 /sbin/syslogd -u syslog
        root       472     1  0 Nov16 ?        00:00:00 /usr/lib/postfix/master
        postfix    482   472  0 Nov16 ?        00:00:00 qmgr -l -t fifo -u
        root       520     1  0 Nov16 ?        00:00:04 /usr/sbin/apache2 -k start
        www-data   523   520  0 Nov16 ?        00:00:00 /usr/sbin/apache2 -k start
        www-data   525   520  0 Nov16 ?        00:00:00 /usr/sbin/apache2 -k start
        www-data   526   520  0 Nov16 ?        00:00:00 /usr/sbin/apache2 -k start
        tomcat    1074     1  0 Nov16 ?        00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/
        postfix   1351   472  0 Nov16 ?        00:00:00 tlsmgr -l -t unix -u -c
        postfix   3413   472  0 17:00 ?        00:00:00 pickup -l -t fifo -u -c
        root      3457   276  0 17:31 ?        00:00:00 sshd: root@pts/0
        root      3459  3457  0 17:31 pts/0    00:00:00 -bash
        root      3470  3459  0 17:31 pts/0    00:00:00 ps -ef

        > iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source     destination
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:http-alt
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:8005
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:http-alt

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

        > netstat -plut
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State    PID/Program name
        tcp        0      0 *:smtp           *:*              LISTEN   472/master
        tcp        0      0 *:3213           *:*              LISTEN   276/sshd
        tcp6       0      0 [::]:smtp        [::]:*           LISTEN   472/master
        tcp6       0      0 [::]:8009        [::]:*           LISTEN   1074/java
        tcp6       0      0 [::]:3213        [::]:*           LISTEN   276/sshd
        tcp6       0      0 [::]:http-alt    [::]:*           LISTEN   1074/java
        tcp6       0      0 [::]:http        [::]:*           LISTEN   520/apache2


  • Is this a right way to use NHibernate?

    - by Venemo
    I spent the rest of the evening reading StackOverflow questions, blog entries, and links about the subject. All of them turned out to be very helpful, but I still feel that they don't really answer my question.

    I'm developing a simple web application, and I'd like to create a reusable data access layer which I can later reuse in other solutions; 99% of these will be web applications. This seems to be a good excuse for me to learn NHibernate and some of the patterns around it. My goals are the following:

    - I don't want the business logic layer to know ANYTHING about the inner workings of the database, nor about NHibernate itself.
    - I want the business logic layer to have the least possible number of assumptions about the data access layer.
    - I want the data access layer to be as simplistic and easy to use as possible. This is going to be a simple project, so I don't want to overcomplicate anything.
    - I want the data access layer to be as non-intrusive as possible.

    With all this in mind, I decided to use the popular repository pattern. I read about this subject on this site and on various dev blogs, and I heard some things about the unit of work pattern. I also looked around and checked out various implementations (including FubuMVC contrib, SharpArchitecture, and code on some blogs). I found that most of these operate on the same principle: they create a "unit of work" which is instantiated when a repository is instantiated, start a transaction, do their work, commit, and then start all over again - so, one ISession per repository, and that's it. The client code then needs to instantiate a repository, do its work with it, and then dispose of it. This usage pattern doesn't meet my need of being as simplistic as possible, so I began thinking about something else.

    I found that NHibernate already has something which makes custom "unit of work" implementations unnecessary: the CurrentSessionContext class. If I configure the session context correctly and do the cleanup when necessary, I'm good to go. So I came up with this:

    I have a static class called NHibernateHelper. First, it has a static property called CurrentSessionFactory, which upon first call instantiates a session factory and stores it in a static field (one ISessionFactory per AppDomain is good enough). More importantly, it has a CurrentSession static property, which checks whether there is an ISession bound to the current session context; if not, it creates one and binds it, and then it returns the ISession bound to the current session context. Because it will be used mostly with WebSessionContext (so, one ISession per HttpRequest - although for the unit tests I configured ThreadStaticSessionContext), it should work seamlessly. After creating and binding an ISession, it hooks an event handler onto the HttpContext.Current.ApplicationInstance.EndRequest event, which takes care of cleaning up the ISession after the request ends. (Of course, it only does this if it is really running in a web environment.)

    With all this set up, NHibernateHelper will always be able to return a valid ISession, so there is no need to instantiate a Repository instance for the "unit of work" to operate properly. Instead, the Repository is a static class which operates on the ISession from the NHibernateHelper.CurrentSession property and exposes some functionality through that.

    I'm curious what you think about this. Is it a valid way of thinking, or am I completely off track here?
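    For reference, here is a condensed sketch of the helper described above (the web-specific EndRequest wiring is omitted, and it assumes the session context class is set in the NHibernate configuration file):

        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Context;

        public static class NHibernateHelper
        {
            private static ISessionFactory sessionFactory;
            private static readonly object padlock = new object();

            public static ISessionFactory CurrentSessionFactory
            {
                get
                {
                    lock (padlock)
                    {
                        if (sessionFactory == null)
                        {
                            // Reads hibernate.cfg.xml, including the
                            // current_session_context_class setting.
                            sessionFactory = new Configuration().Configure().BuildSessionFactory();
                        }
                        return sessionFactory;
                    }
                }
            }

            public static ISession CurrentSession
            {
                get
                {
                    // Bind a fresh session to the context on first access.
                    if (!CurrentSessionContext.HasBind(CurrentSessionFactory))
                        CurrentSessionContext.Bind(CurrentSessionFactory.OpenSession());
                    return CurrentSessionFactory.GetCurrentSession();
                }
            }

            public static void CleanUp()
            {
                // Called at the end of the request (or test) that owns the session.
                ISession session = CurrentSessionContext.Unbind(CurrentSessionFactory);
                if (session != null)
                    session.Dispose();
            }
        }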


  • Can you simplify and generalize this useful jQuery function?

    - by user199368
    Hi, I'm building an e-shop with goods displayed as "tiles" in a grid, as usual. I want to use various sizes of tiles and make sure (via jQuery) there are no free spaces. In the basic situation, I have a 960px wrapper and want to use 240x180px tiles (class .grid_4) and 480x360px tiles (class .grid_8). See the image (imagine no margins/paddings there).

    Problems without jQuery:
    - when the CMS provides the big tile as 6th, there would be a free space under the 5th one
    - when the CMS provides the big tile as 7th, there would be a free space under the 5th and 6th
    - when the CMS provides the big tile as 8th, it would shift to the next line, leaving position no. 8 free

    My solution so far looks like this:

        $(".grid_8").each(function(){
            //console.log("BIG on position "+($(this).index()+1)+" which is "+(($(this).index()+1)%2?"ODD":"EVEN"));
            switch (($(this).index()+1)%4) {
                case 1:
                    // nothing needed
                    //console.log("case 1");
                    break;
                case 2:
                    //need to shift one position and wrap into 240px div
                    //console.log("case 2");
                    $(this).insertAfter($(this).next()); //swaps this with next
                    $(this).prevAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />");
                    break;
                case 3:
                    //need to shift two positions and wrap into 480px div
                    //console.log("case 3");
                    $(this).prevAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); //wraps previous two - forcing them into column
                    $(this).nextAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); //wraps next two - forcing them into column
                    $(this).insertAfter($(this).next()); //moves behind the second column
                    break;
                case 0:
                    //need to shift one position
                    //console.log("case 4");
                    $(this).insertAfter($(this).next());
                    //console.log("shifted to next line");
                    break;
            }
        });

    It should be obvious from the comments how it works: generally, it always makes sure that the big tile is on an odd position (the count of preceding small tiles is even) by shifting one position back if needed. The small tiles to the left of the big one also need to be wrapped in another div so that they appear in a column rather than a row.

    Now, finally, the questions:
    - How can I generalize the function so that I can use even more tile dimensions, like 720x360 (3x2), 480x540 (2x3), etc.?
    - Is there a way to simplify the function? I need to make sure that a big tile counts as a multiple of small tiles when checking the actual position, because calling index() on the tile at position 12 (last tile in the 3rd row) would now return 7 (position 8): tiles at positions 5 and 9 are wrapped together in one column, and the big tile is also just a single div even though it spans 2x2 positions. Is there any clean way to ensure this?

    Thank you very much for any hints. Feel free to reuse the code, I think it can be useful. Josef


  • I'm looking for this graph solution

    - by Ben Fransen
    Hello all, I recently found a pretty graph while browsing the admin skins at ThemeForest; one of the templates has a really nice, smooth, clean graph. I've been searching around, but so far without luck finding out which solution is used. An example can be found at: http://enstyled.com/adminus/original/page.html

    The source of this graph looks like:

        <table class="stats bar" cellpadding="0" cellspacing="0" width="100%">
          <thead>
            <tr>
              <td>&nbsp;</td>
              <th scope="col">02.09</th>
              <th scope="col">03.09</th>
              <th scope="col">04.09</th>
              <th scope="col">05.09</th>
              <th scope="col">06.09</th>
              <th scope="col">07.09</th>
              <th scope="col">08.09</th>
              <th scope="col">09.09</th>
              <th scope="col">10.09</th>
              <th scope="col">11.09</th>
              <th scope="col">12.09</th>
              <th scope="col">01.10</th>
              <th scope="col">02.10</th>
              <th scope="col">03.10</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <th scope="row">Page views</th>
              <td>1800</td> <td>900</td> <td>700</td> <td>1200</td> <td>600</td>
              <td>2800</td> <td>3200</td> <td>500</td> <td>2200</td> <td>1000</td>
              <td>1200</td> <td>700</td> <td>650</td> <td>800</td>
            </tr>
            <tr>
              <th scope="row">Unique visitors</th>
              <td>1600</td> <td>650</td> <td>550</td> <td>900</td> <td>500</td>
              <td>2300</td> <td>2700</td> <td>350</td> <td>1700</td> <td>600</td>
              <td>1000</td> <td>500</td> <td>400</td> <td>700</td>
            </tr>
          </tbody>
        </table>

    Can someone please tell me which graphing solution is used here? Thanks in advance, Ben Fransen


  • Help me restructure my Doc/View code more correctly

    - by Harvey
    Edited by OP. My program needs a lot of cleanup and restructuring. In another post I asked about leaving the MFC Doc/View framework and going to the WinProc-and-message-loop way (what is that called for short?). At present I am thinking that I should clean up what I have in Doc/View and perhaps later convert to non-MFC, if that even makes sense.

    My Document class currently has almost nothing useful in it. I think a place to start is the InitInstance() function (posted below). In this part:

        POSITION pos = pDocTemplate->GetFirstDocPosition();
        CLCWDoc *pDoc = (CLCWDoc *)pDocTemplate->GetNextDoc(pos);
        ASSERT_VALID(pDoc);
        POSITION vpos = pDoc->GetFirstViewPosition();
        CChildView *pCV = (CChildView *)pDoc->GetNextView(vpos);

    This seems strange to me. I only have one doc and one view, yet I feel like I am going about it backwards with GetNextDoc() and GetNextView(). To use a silly analogy: it's like I have a book in my hand, but I have to look in its index to find out what page the title of the book is on.

    I'm tired of feeling embarrassed about my code. I either need correction or reassurance, or both. :) Also, all the miscellaneous items are in no particular order. I would like to rearrange them into an order that may be more standard, structured, or straightforward. ALL suggestions welcome!

        BOOL CLCWApp::InitInstance()
        {
            InitCommonControls();
            if(!AfxOleInit())
                return FALSE;

            // Initialize the Toolbar dll. (Toolbar code by Nikolay Denisov.)
            InitGuiLibDLL();    // NOTE: insert GuiLib.dll into the resource chain

            SetRegistryKey(_T("Real Name Removed"));

            // Register document templates
            CSingleDocTemplate* pDocTemplate;
            pDocTemplate = new CSingleDocTemplate(
                IDR_MAINFRAME,
                RUNTIME_CLASS(CLCWDoc),
                RUNTIME_CLASS(CMainFrame),
                RUNTIME_CLASS(CChildView));
            AddDocTemplate(pDocTemplate);

            // Parse command line for standard shell commands, DDE, file open
            CCmdLineInfo cmdInfo;
            ParseCommandLine(cmdInfo);

            // Dispatch commands specified on the command line.
            // The window frame appears on the screen in here.
            if (!ProcessShellCommand(cmdInfo))
            {
                AfxMessageBox("Failure processing Command Line");
                return FALSE;
            }

            POSITION pos = pDocTemplate->GetFirstDocPosition();
            CLCWDoc *pDoc = (CLCWDoc *)pDocTemplate->GetNextDoc(pos);
            ASSERT_VALID(pDoc);
            POSITION vpos = pDoc->GetFirstViewPosition();
            CChildView *pCV = (CChildView *)pDoc->GetNextView(vpos);

            if(!cmdInfo.m_Fn1.IsEmpty() && !cmdInfo.m_Fn2.IsEmpty())
            {
                pCV->OpenF1(cmdInfo.m_Fn1);
                pCV->OpenF2(cmdInfo.m_Fn2);
                pCV->DoCompare();   // Sends a paint message when complete
            }

            // enable file manager drag/drop and DDE Execute open
            m_pMainWnd->DragAcceptFiles(TRUE);
            m_pMainWnd->ShowWindow(SW_SHOWNORMAL);
            m_pMainWnd->UpdateWindow();   // paints the window background

            pCV->bDoSize = true;   // Prevent a dozen useless size calculations

            return TRUE;
        }

    Thanks


  • N-tier Repository POCOs - Aggregates?

    - by Sam
    Assume the following simple POCOs, Country and State:

        public partial class Country
        {
            public Country()
            {
                States = new List<State>();
            }

            public virtual int CountryId { get; set; }
            public virtual string Name { get; set; }
            public virtual string CountryCode { get; set; }
            public virtual ICollection<State> States { get; set; }
        }

        public partial class State
        {
            public virtual int StateId { get; set; }
            public virtual int CountryId { get; set; }
            public virtual Country Country { get; set; }
            public virtual string Name { get; set; }
            public virtual string Abbreviation { get; set; }
        }

    Now assume I have a simple repository that looks something like this:

        public partial class CountryRepository : IDisposable
        {
            protected internal IDatabase _db;

            public CountryRepository()
            {
                _db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]);
            }

            public IEnumerable<Country> GetAll()
            {
                return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null);
            }

            public Country Get(object id)
            {
                return _db.SingleById(id);
            }

            public void Add(Country c)
            {
                _db.Insert(c);
            }

            /* ...And So On... */
        }

    Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this:

        public partial class CountryListVM
        {
            [Key]
            public int CountryId { get; set; }
            public string Name { get; set; }
            public string CountryCode { get; set; }
            public int StateCount { get; set; }
        }

    When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc.) directly in my UI layer, I can easily do something like this:

        IList<CountryListVM> list = db.Countries
            .OrderBy(c => c.Name)
            .Select(c => new CountryListVM()
            {
                CountryId = c.CountryId,
                Name = c.Name,
                CountryCode = c.CountryCode,
                StateCount = c.States.Count
            })
            .ToList();

    But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to:

    1. Return the Country with a populated States collection, then map over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed.
    2. Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain POCOs. The downside to this approach is that I'm leaking UI-specific stuff (MVC data validation annotations) into my previously clean POCOs.

    Are there other options? How are you handling these types of things?
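    A third option I have been sketching (names are hypothetical): keep the repository in charge of the query, but have it return a purpose-built read model that carries no UI validation attributes, so nothing MVC-specific leaks into the data layer and no State rows are loaded just to be counted:

        using System.Collections.Generic;

        // Plain read model: no [Key] or other UI annotations.
        public class CountrySummary
        {
            public int CountryId { get; set; }
            public string Name { get; set; }
            public string CountryCode { get; set; }
            public int StateCount { get; set; }
        }

        public partial class CountryRepository
        {
            public IEnumerable<CountrySummary> GetSummaries()
            {
                // One aggregate query instead of materializing every State.
                return _db.Query<CountrySummary>(
                    @"SELECT c.CountryId, c.Name, c.CountryCode,
                             (SELECT COUNT(*) FROM States s
                              WHERE s.CountryId = c.CountryId) AS StateCount
                      FROM Countries c
                      ORDER BY c.Name", null);
            }
        }

    The MVC project can then map CountrySummary to CountryListVM (or use it directly) without the Common library ever referencing MVC.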


  • How to write a bison grammar for WDI?

    - by Rizo
    I need some help with bison grammar construction. From another question of mine: I'm trying to make a meta-language for writing markup code (such as XML and HTML) which can be directly embedded into C/C++ code. Here is a simple sample written in this language, which I call WDI (Web Development Interface):

        /*
         * Simple wdi/html sample source code
         */

        #include <mySite>

        string name = "myName";
        string toCapital(string str);

        html {
            head {
                title {
                    mySiteTitle;
                }
                link(rel="stylesheet", href="style.css");
            }
            body(id="default") {
                // Page content wrapper
                div(id="wrapper", class="some_class") {
                    h1 {
                        "Hello, " + toCapital(name) + "!";
                    }
                    // Lists post
                    ul(id="post_list") {
                        for(post in posts) {
                            li {
                                a(href=post.getID()) {
                                    post.title;
                                }
                            }
                        }
                    }
                }
            }
        }

    Basically it is C source with a user-friendly interface for HTML. As you can see, the traditional tag-based style is substituted by a C-like one, with blocks delimited by curly braces. I need to build an interpreter to translate this code to HTML and subsequently insert it into the C code, so that it can be compiled. The C part stays intact. Inside the WDI source it is not necessary to use prints; every return statement will be used for output (in a printf function). The program's output will be clean HTML code. So, for example, a heading 1 tag would be transformed like this:

        h1 {
            "Hello, " + toCapital(name) + "!";
        }

        // would become:
        printf("<h1>Hello, %s!</h1>", toCapital(name));

    My main goal is to create an interpreter that translates WDI source to HTML like this:

        tag(attributes) {content}   =   <tag attributes>content</tag>

    Secondly, the HTML code returned by the interpreter has to be inserted into C code with printfs. Variables and functions that occur inside WDI should also be sorted out, so that they can be used as printf parameters (the case of toCapital(name) in the sample source). Here are my flex/bison files. The lexer:

        id      [a-zA-Z_]([a-zA-Z0-9_])*
        number  [0-9]+
        string  \".*\"

        %%

        {id}        { yylval.string = strdup(yytext); return(ID); }
        {number}    { yylval.number = atoi(yytext); return(NUMBER); }
        {string}    { yylval.string = strdup(yytext); return(STRING); }
        "("         { return(LPAREN); }
        ")"         { return(RPAREN); }
        "{"         { return(LBRACE); }
        "}"         { return(RBRACE); }
        "="         { return(ASSIGN); }
        ","         { return(COMMA); }
        ";"         { return(SEMICOLON); }
        \n|\r|\f    { /* ignore EOL */ }
        [ \t]+      { /* ignore whitespace */ }
        .           { /* return(CCODE); Find C source */ }

        %%

    And the grammar:

        %start wdi
        %token LPAREN RPAREN LBRACE RBRACE ASSIGN COMMA SEMICOLON CCODE QUOTE

        %union {
            int number;
            char *string;
        }

        %token <string> ID STRING
        %token <number> NUMBER

        %%

        wdi : /* empty */
            | blocks
            ;

        blocks : block
               | blocks block
               ;

        block : head SEMICOLON
              | head body
              ;

        head : ID
             | ID attributes
             ;

        attributes : LPAREN RPAREN
                   | LPAREN attribute_list RPAREN
                   ;

        attribute_list : attribute
                       | attribute COMMA attribute_list
                       ;

        attribute : key ASSIGN value
                  ;

        key : ID { $$ = $1; }
            ;

        value : STRING { $$ = $1; }
              /*| NUMBER*/
              /*| CCODE*/
              ;

        body : LBRACE content RBRACE
             ;

        content : /* */
                | blocks
                | STRING SEMICOLON
                | NUMBER SEMICOLON
                | CCODE
                ;

        %%

    I am having difficulty defining a proper grammar for the language, especially in splitting WDI and C code. I just started learning language-processing techniques, so I need some orientation. Could someone correct my code or give some examples of the right way to solve this problem?


  • WPF Issues with Control Layout

    - by Brett Powell
    I am making an application that connects to our billing software using its API, and I am running into a few issues getting the layout working properly.

    I want to make it so that when one of the expanders is minimized, the other one fills the gap, and when it is expanded again, the other expander goes back to where it was. Right now, when the arrow is clicked on one, there is just an empty gap. I used a DockPanel as the parent, which I assumed would do this automatically, but it isn't working.

    Second question: is there a way to make these areas resizable? I don't want to get too frisky with allowing the user to undock the menus (I don't even know if that is possible with just straight WPF), but it would be nice if they could change the width/height of them.

    Also, just a newbie question about C#: what is the equivalent of a C++ header file? It looks like you just use .cs files, but I am not sure. I want to extract all of the functions that pull data from the billing software and put them into a different file to clean up the code (see the partial-class sketch after the XAML below).

    Here is my XAML:

        <Window x:Class="WpfApplication3.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Billing Management" Height="550" Width="754"
                xmlns:shared="http://schemas.actiprosoftware.com/winfx/xaml/shared"
                WindowStartupLocation="CenterScreen" WindowStyle="ThreeDBorderWindow">
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition Height="22" />
                    <RowDefinition />
                </Grid.RowDefinitions>
                <Menu Height="22" Name="menu1" Margin="0" HorizontalAlignment="Stretch"
                      VerticalAlignment="Top" HorizontalContentAlignment="Left"
                      IsEnabled="True" IsMainMenu="True">
                    <MenuItem Header="_File">
                        <MenuItem Header="_Open" />
                        <MenuItem Header="_Close" />
                        <Separator/>
                        <MenuItem Header="_Exit" />
                    </MenuItem>
                </Menu>
                <TabControl Name="tabControl1" HorizontalContentAlignment="Stretch"
                            VerticalContentAlignment="Stretch" BorderThickness="1" Padding="0"
                            TabStripPlacement="Bottom" UseLayoutRounding="False"
                            FlowDirection="LeftToRight" Grid.Row="1">
                    <TabItem Header="Main" Name="tabItem1" Margin="0">
                        <DockPanel Name="dockPanel1" LastChildFill="True">
                            <ListBox Height="100" Name="listBox3" DockPanel.Dock="Top" />
                            <ListBox Name="listBox4" Width="200" DockPanel.Dock="Right" />
                            <DockPanel Height="Auto" Name="dockPanel2" Width="Auto"
                                       VerticalAlignment="Stretch" LastChildFill="True">
                                <shared:AnimatedExpander Header="Staff Online" Width="200"
                                        Name="expanderStaffOnline" IsExpanded="True" Height="194"
                                        BorderThickness="0" DockPanel.Dock="Top"
                                        VerticalContentAlignment="Stretch">
                                    <ListBox Name="listboxStaffOnline" Width="Auto" Height="Auto"
                                             Margin="0" VerticalAlignment="Stretch"
                                             Loaded="listboxStaffOnline_Loaded" />
                                </shared:AnimatedExpander>
                                <shared:AnimatedExpander Header="Test Menu 2" Height="Auto"
                                        Name="animatedExpander1" BorderThickness="1"
                                        Margin="0,0,0,0" IsExpanded="True"
                                        VerticalContentAlignment="Stretch">
                                    <ListBox Height="Auto" HorizontalAlignment="Stretch"
                                             Name="listBox6" VerticalAlignment="Stretch"
                                             Margin="0" BorderThickness="1" />
                                </shared:AnimatedExpander>
                            </DockPanel>
                            <ListBox Height="100" Name="listboxAdminLogs" DockPanel.Dock="Bottom"
                                     Loaded="listboxAdminLogs_Loaded" />
                            <ListBox Name="listBox5" />
                        </DockPanel>
                    </TabItem>
                    <TabItem Header="Support" Name="tabItem2" Margin="0">
                    </TabItem>
                    <TabItem Header="Clients" />
                    <TabItem Header="Billing" />
                    <TabItem Header="Orders" />
                </TabControl>
            </Grid>
        </Window>
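    On the header-file question mentioned above: C# has no headers; the compiler merges every "partial" declaration of a class into one type, so the API-access methods can live in their own .cs file. A minimal sketch (the file name, method, and URL are invented for illustration):

        // BillingApi.cs - a second file of the same window class.
        namespace WpfApplication3
        {
            public partial class MainWindow
            {
                // Placeholder for the billing-software API plumbing; keeping
                // these methods here leaves the layout code-behind uncluttered.
                private string BuildApiUrl(string action)
                {
                    return "https://billing.example.com/api/" + action;
                }
            }
        }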


  • Game AI: Pattern for implementing Sense-Think-Act components?

    - by Rosarch
    I'm developing a game. Each entity in the game is a GameObject, and each GameObject is composed of a GameObjectController, GameObjectModel, and GameObjectView (or inheritors thereof). For NPCs, the GameObjectController is split into:

    - IThinkNPC: reads current state and makes a decision about what to do
    - IActNPC: updates state based on what needs to be done
    - ISenseNPC: reads current state to answer world queries (e.g. "am I standing in the shadows?")

    My question: is this OK for the ISenseNPC interface?

        public interface ISenseNPC
        {
            // ...

            /// <summary>
            /// True if `dest` is a safe point to which to retreat.
            /// </summary>
            bool IsSafeToRetreat(Vector2 dest, float angleToThreat, float range);

            /// <summary>
            /// Finds a new location to which to retreat.
            /// </summary>
            Vector2 newRetreatDest(float angleToThreat);

            /// <summary>
            /// Returns the closest LightSource that illuminates the NPC.
            /// Null if the NPC is not illuminated.
            /// </summary>
            ILightSource ClosestIlluminatingLight();

            /// <summary>
            /// True if the NPC is sufficiently far away from target.
            /// Assumes that target is the only entity it could ever run from.
            /// </summary>
            bool IsSafeFromTarget();
        }

    None of the methods take any world-state parameters. Instead, the implementation is expected to maintain a reference to the relevant GameObjectController and read that. However, I'm now trying to write unit tests for this. Obviously, it's necessary to use mocking, since I can't pass arguments directly. The way I'm doing it feels really brittle - what if another implementation comes along that uses the world-query utilities in a different way? Really, I'm not testing the interface, I'm testing the implementation. Poor.

    The reason I used this pattern in the first place was to keep the IThinkNPC implementation code clean:

        public BehaviorState RetreatTransition(BehaviorState currentBehavior)
        {
            if (sense.IsCollidingWithTarget())
            {
                NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.ATTACK.ToString(), "is colliding with target");
                return BehaviorState.ATTACK;
            }

            if (sense.IsSafeFromTarget() && sense.ClosestIlluminatingLight() == null)
            {
                return BehaviorState.WANDER;
            }

            if (sense.ClosestIlluminatingLight() != null && sense.SeesTarget())
            {
                NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.ATTACK.ToString(), "collides with target");
                return BehaviorState.CHASE;
            }

            return currentBehavior;
        }

    Perhaps the cleanliness isn't worth it, however. So, if ISenseNPC took all the parameters it needs every time, I could make it static. Is there any problem with that?
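    To illustrate the parameterized alternative I am weighing, here is a sketch (WorldSnapshot and the other names are invented for the example; the real code would pass XNA's Vector2 rather than this stand-in struct):

        // Explicit inputs make the implementation testable without mocks:
        // a test simply constructs a WorldSnapshot and asserts on the result.
        public struct Point2
        {
            public float X;
            public float Y;
            public Point2(float x, float y) { X = x; Y = y; }
        }

        public class WorldSnapshot
        {
            public Point2 NpcPosition;
            public Point2 TargetPosition;
            public float SafeDistance;
        }

        public class ExplicitSenseNPC
        {
            // A pure function of its inputs - no controller reference needed.
            public bool IsSafeFromTarget(WorldSnapshot world)
            {
                float dx = world.TargetPosition.X - world.NpcPosition.X;
                float dy = world.TargetPosition.Y - world.NpcPosition.Y;
                // Compare squared distances to avoid a square root.
                return dx * dx + dy * dy >= world.SafeDistance * world.SafeDistance;
            }
        }

    The trade-off is exactly the one described above: IThinkNPC callers now have to assemble the snapshot, which moves the clutter rather than removing it.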

    Read the article

  • WebClient.DownloadDataAsync is freezing my UI

    - by Matías
    Hi, in my Form constructor, after InitializeComponent, I have the following code: using (WebClient client = new WebClient()) { client.DownloadDataCompleted += new DownloadDataCompletedEventHandler(client_DownloadDataCompleted); client.DownloadDataAsync("http://example.com/version.txt"); } When I start my form, the UI doesn't appear until client_DownloadDataCompleted is raised. The client_DownloadDataCompleted method is empty, so there's no problem there. What am I doing wrong? How am I supposed to do this without freezing the UI? Thanks for your time. Best regards. FULL CODE: Program.cs using System; using System.Windows.Forms; namespace Lala { static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } } Form1.cs using System; using System.Net; using System.Windows.Forms; namespace Lala { public partial class Form1 : Form { WebClient client = new WebClient(); public Form1() { client.DownloadDataCompleted += new DownloadDataCompletedEventHandler(client_DownloadDataCompleted); client.DownloadDataAsync(new Uri("http://www.google.com")); InitializeComponent(); } void client_DownloadDataCompleted(object sender, DownloadDataCompletedEventArgs e) { textBox1.Text += "A"; } } partial class Form1 { /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.IContainer components = null; /// <summary> /// Clean up any resources being used. /// </summary> /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param> protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } #region Windows Form Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { this.button1 = new System.Windows.Forms.Button(); this.textBox1 = new System.Windows.Forms.TextBox(); this.SuspendLayout(); // // button1 // this.button1.Location = new System.Drawing.Point(12, 12); this.button1.Name = "button1"; this.button1.Size = new System.Drawing.Size(75, 23); this.button1.TabIndex = 0; this.button1.Text = "button1"; this.button1.UseVisualStyleBackColor = true; // // textBox1 // this.textBox1.Location = new System.Drawing.Point(12, 41); this.textBox1.Multiline = true; this.textBox1.Name = "textBox1"; this.textBox1.Size = new System.Drawing.Size(468, 213); this.textBox1.TabIndex = 1; // // Form1 // this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font; this.ClientSize = new System.Drawing.Size(492, 266); this.Controls.Add(this.textBox1); this.Controls.Add(this.button1); this.Name = "Form1"; this.Text = "Form1"; this.ResumeLayout(false); this.PerformLayout(); } #endregion private System.Windows.Forms.Button button1; private System.Windows.Forms.TextBox textBox1; } }
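    The opening portion of DownloadDataAsync (proxy and DNS resolution) is known to run synchronously on the calling thread before the transfer goes asynchronous, and here that happens in the constructor, before the window ever shows (the using block in the first snippet also disposes the client while the request may still be in flight, which is worth avoiding). A sketch of one common fix, starting the request only once the form is visible:

    public Form1()
    {
        InitializeComponent();
        Shown += Form1_Shown;   // kick off the download once the UI is on screen
    }

    void Form1_Shown(object sender, EventArgs e)
    {
        var client = new WebClient();
        // Completion is posted back to the UI thread, so touching textBox1 is safe.
        client.DownloadDataCompleted += client_DownloadDataCompleted;
        client.DownloadDataAsync(new Uri("http://www.google.com"));
    }

    void client_DownloadDataCompleted(object sender, DownloadDataCompletedEventArgs e)
    {
        textBox1.Text += "A";
        ((WebClient)sender).Dispose();   // dispose only after the request finishes
    }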

    Read the article

  • segmented controls mangled during initial transition animation

    - by dLux
    Greetings and salutations, folks. I'm relatively new to Objective-C & iPhone programming, so bear with me if I've overlooked something obvious. I created a simple app to play with the different transition animations, setting up a couple of segmented controls and a slider: (Flip/Curl), (left/right) | (up/down), (EaseInOut/EaseIn/EaseOut/Linear). I created a view controller class, and the super view controller switches between 2 instances of the subclass. As you can see from the following image, the first time I switch to the 2nd instance, the segmented controls are mangled while the animation is occurring; I'd guess they haven't had enough time to draw themselves completely: http://img689.imageshack.us/img689/2320/mangledbuttonsduringtra.png They're fine once the animation is done, and on any subsequent switch. If I specify cache:NO in the setAnimationTransition it helps, but there still seems to be some sort of progressive reveal for the text in the segmented controls; they still don't seem to be pre-rendered or initialized properly. (And surely there's a way to do this while caching the view being transitioned to, since in this case the view isn't changing and should be cacheable.) I'm building my code based on a couple of tutorials from a book, so I updated didReceiveMemoryWarning to set the instanced view controllers to nil; when I invoke a memory warning in the simulator, I assume it's purging the other view, and it acts like a first transition after loading - the view being transitioned to appears just like the image above. I guess it can't hurt to include the code (sorry if it's considered spamming); this is basically half of it, with a similar chunk following in an else statement for the case of the 2nd side being present, switching back to the 1st: - (IBAction)switchViews:(id)sender { [UIView beginAnimations:@"Transition Animation" context:nil]; if (self.sideBViewController.view.superview == nil) // sideA is active, sideB is coming { if (self.sideBViewController == nil) { SideAViewController *sBController = [[SideAViewController alloc] initWithNibName:@"SideAViewController" bundle:nil]; self.sideBViewController = sBController; [sBController release]; } [UIView setAnimationDuration:sideAViewController.transitionDurationSlider.value]; if ([sideAViewController.transitionAnimation selectedSegmentIndex] == 0) { // flip: 0 == left, 1 == right if ([sideAViewController.flipDirection selectedSegmentIndex] == 0) [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:self.view cache:YES]; else [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight forView:self.view cache:YES]; } else { // curl: 0 == up, 1 == down if ([sideAViewController.curlDirection selectedSegmentIndex] == 0) [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES]; else [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES]; } if ([sideAViewController.animationCurve selectedSegmentIndex] == 0) [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut]; else if ([sideAViewController.animationCurve selectedSegmentIndex] == 1) [UIView setAnimationCurve:UIViewAnimationCurveEaseIn]; else if ([sideAViewController.animationCurve selectedSegmentIndex] == 2) [UIView setAnimationCurve:UIViewAnimationCurveEaseOut]; else if ([sideAViewController.animationCurve selectedSegmentIndex] == 3) [UIView setAnimationCurve:UIViewAnimationCurveLinear]; [sideBViewController viewWillAppear:YES]; [sideAViewController viewWillDisappear:YES]; [sideAViewController.view removeFromSuperview]; [self.view insertSubview:sideBViewController.view atIndex:0]; [sideBViewController viewDidAppear:YES]; [sideAViewController viewDidDisappear:YES]; } Any other tips or pointers about writing good clean code are also appreciated; I realize I still have a lot to learn. Thank you for your time, -- d
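    One thing that often cures first-transition artifacts like this is forcing the incoming view to complete a layout pass before the animation begins, so the cached snapshot has something fully rendered to copy. A hedged sketch of that idea, placed just before the beginAnimations call (untested against the code above):

    // Give the destination view its final frame and force a layout pass
    // up front, so the transition's cached snapshot is not half-rendered.
    self.sideBViewController.view.frame = self.view.bounds;
    [self.sideBViewController.view setNeedsLayout];
    [self.sideBViewController.view layoutIfNeeded];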

    Read the article

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end. Choice of language Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective C, Python, Ruby, bash scripts, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though. The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way. Clients may change the objects, and the library must be notified thereof. The library may change the objects, and the client must be notified thereof, if it already has a handle to that object. The library must be multi-threaded, because it will maintain network connections to several other hosts. While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure). Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience! My assumptions, so far So here are some of my assumptions and conclusions, which I gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++). But my interface has to be a clean C interface that does not involve name mangling. Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values. If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory. For usage in a C++ application, I might also offer an object oriented interface (which is also prone to name mangling, so the app must either use the same compiler, or include the library in source form). Is this also true for usage in C#? For usage in Java SE / Java EE, the Java Native Interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client. For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM. For usage in Bash scripts, there must be an executable with a command line interface. For the other client languages, I have no idea. For most client languages, it would be nice to have a kind of adapter interface written in that language. I think there are tools to automatically generate this for Java and some others. For object oriented languages, it might be possible to create an object oriented adapter which hides the fact that the interface to the library is function based - but I don't know if it's worth the effort. Possible subquestions: is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? Meta: this question is rather long; do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)
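    On the clean-C-interface assumption, the shape that maps most easily onto JNI, P/Invoke, ctypes, and friends is an opaque handle plus flat functions, wrapped in extern "C" guards so one header serves both C and C++ callers. A small illustrative sketch - every name in it is invented:

    /* mylib.h - flat C facade over an internal C++ implementation */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct mylib_session mylib_session;  /* opaque: clients never see the layout */

    mylib_session *mylib_create(const char *config);
    void           mylib_destroy(mylib_session *s);

    /* Primitive parameters and return codes only (negative = error). */
    int mylib_node_count(const mylib_session *s);
    int mylib_node_name(const mylib_session *s, int index, char *buf, int buflen);

    /* Library-to-client change notification via a plain function pointer;
       delivering callbacks from a single dedicated thread keeps bindings simple. */
    typedef void (*mylib_change_cb)(void *userdata, int node_index);
    void mylib_set_change_callback(mylib_session *s, mylib_change_cb cb, void *userdata);

    #ifdef __cplusplus
    }
    #endif

    As for libraries worth studying, sqlite3 and libcurl are the classic examples of exactly this style: opaque handles, flat C functions, and bindings in practically every client language listed above.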

    Read the article

  • Build and migrated to software raid (mdadm) on GPT disk, now can't assemble array

    - by John H
    mdadm, gpt issues, unrecognized partitions. Simplified question: How do I get mdadm to recognize GPT partitions? I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software raid 1. I have done something similar in the past, but in this case, I was adding in a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array of /dev/md127 (the OS assigned that name and keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the raid array accessible so I can pull the data and start fresh (without using GPT). /dev/md127 /dev/sda /dev/sda1 <- GPT type partition /dev/sda1 <- exists within the GPT part, member of md127 /dev/sda2 <- exists within the GPT part, empty /dev/sdb /dev/sdb1 <- GPT type partition /dev/sdb1 <- exists within the GPT part, member of md127 History: POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a raid1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition. POINT B: I now had two copies of my data - one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1 as fully active and synced. POINT C: I rebooted onto Linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing. POINT D (current): I rebooted again to test if the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions. 
-- root@freshdesk /root % uname -a Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux === /proc/partitions and parted look good: root@freshdesk /root % cat /proc/partitions major minor #blocks name 7 0 301788 loop0 8 0 976762584 sda 8 1 732579840 sda1 8 2 244181703 sda2 8 16 732574584 sdb 8 17 732573543 sdb1 8 32 7876607 sdc 8 33 7873349 sdc1 (parted) print all Model: ATA ST31000528AS (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 750GB 750GB ext4 2 750GB 1000GB 250GB Linux/Windows data Model: ATA SAMSUNG HD753LJ (scsi) Disk /dev/sdb: 750GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 750GB 750GB ext4 Linux RAID raid Model: SanDisk SanDisk Cruzer (scsi) Disk /dev/sdc: 8066MB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 31.7kB 8062MB 8062MB primary fat32 boot, lba === # no sda2, and I double the sdb1 is the one shown in parted root@freshdesk /root % blkid /dev/loop0: TYPE="squashfs" /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4" /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat" /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member" root@freshdesk /root % mdadm -E /dev/sda1 mdadm: No md superblock detected on /dev/sda1. # this is probably a result of me attempting to force the array up, putting superblocks on the GPT partition root@freshdesk /root % mdadm -E /dev/sdb1 /dev/sdb1: Magic : a92b4efc Version : 0.90.00 UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41 Creation Time : Fri Mar 30 19:25:30 2012 Raid Level : raid1 Used Dev Size : 732568320 (698.63 GiB 750.15 GB) Array Size : 732568320 (698.63 GiB 750.15 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 127 Update Time : Sat Mar 31 12:39:38 2012 State : clean Active Devices : 1 Working Devices : 2 Failed Devices : 1 Spare Devices : 1 Checksum : a7d038b3 - correct Events : 20195 Number Major Minor RaidDevice State this 2 8 17 2 spare /dev/sdb1 0 0 8 1 0 active sync /dev/sda1 1 1 0 0 1 faulty removed 2 2 8 17 2 spare /dev/sdb1 === root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1 mdadm: no recogniseable superblock on /dev/sda1 mdadm: /dev/sda1 has no superblock - assembly aborted root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1 mdadm: cannot open device /dev/sdb1: Device or resource busy mdadm: /dev/sdb1 has no superblock - assembly aborted

    Read the article

  • can't get texture to work

    - by user583713
    It's been a while since I used Android OpenGL, but for whatever reason I get a white square box and not the texture I want on the screen. Oh, I do not think this would matter, but just in case: I put a LinearLayout view first, then the SurfaceView. Anyway, here is my code: public class GameEngine { private float vertices[]; private float textureUV[]; private int[] textureId = new int[1]; private FloatBuffer vertextBuffer; private FloatBuffer textureBuffer; private short indices[] = {0,1,2,2,1,3}; private ShortBuffer indexBuffer; private float x, y, z; private float rot, rotX, rotY, rotZ; public GameEngine() { } public void setEngine(float x, float y, float vertices[]){ this.x = x; this.y = y; this.vertices = vertices; ByteBuffer vbb = ByteBuffer.allocateDirect(this.vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); vertextBuffer = vbb.asFloatBuffer(); vertextBuffer.put(this.vertices); vertextBuffer.position(0); vertextBuffer.clear(); ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2); ibb.order(ByteOrder.nativeOrder()); indexBuffer = ibb.asShortBuffer(); indexBuffer.put(indices); indexBuffer.position(0); indexBuffer.clear(); } public void draw(GL10 gl){ gl.glLoadIdentity(); gl.glTranslatef(x, y, z); gl.glRotatef(rot, rotX, rotY, rotZ); gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertextBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); } public void LoadTexture(float textureUV[], GL10 gl, InputStream is) throws IOException{ this.textureUV = textureUV; ByteBuffer tbb = ByteBuffer.allocateDirect(this.textureUV.length * 4); tbb.order(ByteOrder.nativeOrder()); textureBuffer = tbb.asFloatBuffer(); textureBuffer.put(this.textureUV); textureBuffer.position(0); textureBuffer.clear(); Bitmap bitmap = null; try{ bitmap = BitmapFactory.decodeStream(is); }finally{ try{ is.close(); is = null; gl.glGenTextures(1, textureId,0); gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); //Clean up gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]); bitmap.recycle(); }catch(IOException e){ } } } public void setVector(float x, float y, float z){ this.x = x; this.y = y; this.z = z; } public void setRot(float rot, float x, float y, float z){ this.rot = rot; this.rotX = x; this.rotY = y; this.rotZ = z; } public float getZ(){ return z; } }
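    A frequent cause of an all-white quad on the GL10 path is that texturing is never enabled; nothing in the code above calls glEnable(GL_TEXTURE_2D). A hedged sketch of the change in draw():

    // Texturing must be enabled before the textured draw call;
    // without it the quad samples nothing and renders plain white.
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);
    // ... existing client-state setup and glDrawElements as above ...
    gl.glDisable(GL10.GL_TEXTURE_2D);

    Two other usual suspects worth checking: LoadTexture must run on the GL thread (onSurfaceCreated or onDrawFrame) for glGenTextures to land in the right context, and the bitmap's dimensions should be powers of two, since many GLES 1.x devices silently produce white for non-power-of-two textures.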

    Read the article

  • c# send recive object over network?

    - by Data-Base
    Hello, I'm working on a server/client project. The client will be asking the server for info, and the server will send it back to the client. The info may be a string, number, array, list, ArrayList, or any other object. I found a lot of examples, but I ran into issues! The solution I found so far is to serialize the object (data) and send it, then de-serialize it for processing. Here is the server code: public void RunServer(string SrvIP,int SrvPort) { try { var ipAd = IPAddress.Parse(SrvIP); /* Initializes the Listener */ if (ipAd != null) { var myList = new TcpListener(ipAd, SrvPort); /* Start Listeneting at the specified port */ myList.Start(); Console.WriteLine("The server is running at port "+SrvPort+"..."); Console.WriteLine("The local End point is :" + myList.LocalEndpoint); Console.WriteLine("Waiting for a connection....."); while (true) { Socket s = myList.AcceptSocket(); Console.WriteLine("Connection accepted from " + s.RemoteEndPoint); var b = new byte[100]; int k = s.Receive(b); Console.WriteLine("Recieved..."); for (int i = 0; i < k; i++) Console.Write(Convert.ToChar(b[i])); string cmd = Encoding.ASCII.GetString(b); if (cmd.Contains("CLOSE-CONNECTION")) break; var asen = new ASCIIEncoding(); // sending text s.Send(asen.GetBytes("The string was received by the server.")); // the line above to be modified to send serialized object? Console.WriteLine("\nSent Acknowledgement"); s.Close(); Console.ReadLine(); } /* clean up */ myList.Stop(); } } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } } Here is the client code that should return an object: public object runClient(string SrvIP, int SrvPort) { object obj = null; try { var tcpclnt = new TcpClient(); Console.WriteLine("Connecting....."); tcpclnt.Connect(SrvIP, SrvPort); // use the ipaddress as in the server program Console.WriteLine("Connected"); Console.Write("Enter the string to be transmitted : "); var str = Console.ReadLine(); Stream stm = tcpclnt.GetStream(); var asen = new ASCIIEncoding(); if (str != null) { var ba = asen.GetBytes(str); Console.WriteLine("Transmitting....."); stm.Write(ba, 0, ba.Length); } var bb = new byte[2000]; var k = stm.Read(bb, 0, bb.Length); string data = null; for (var i = 0; i < k; i++) Console.Write(Convert.ToChar(bb[i])); //convert to object code ?????? Console.ReadLine(); tcpclnt.Close(); } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } return obj; } I need to know a good serialize/deserialize approach and how to integrate it into the solution above :-( I would be really thankful for any help. Cheers
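    One workable fill-in for the missing piece, assuming .NET's built-in BinaryFormatter is acceptable (types must be marked [Serializable], and it should only be used between trusted peers): length-prefix each message so the receiver knows exactly how many bytes form one object. A sketch:

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    static class Wire
    {
        public static void SendObject(Stream stm, object obj)
        {
            var bf = new BinaryFormatter();
            using (var ms = new MemoryStream())
            {
                bf.Serialize(ms, obj);
                byte[] payload = ms.ToArray();
                stm.Write(BitConverter.GetBytes(payload.Length), 0, 4); // 4-byte length prefix
                stm.Write(payload, 0, payload.Length);
            }
        }

        public static object ReceiveObject(Stream stm)
        {
            byte[] lenBuf = ReadExactly(stm, 4);
            byte[] payload = ReadExactly(stm, BitConverter.ToInt32(lenBuf, 0));
            return new BinaryFormatter().Deserialize(new MemoryStream(payload));
        }

        static byte[] ReadExactly(Stream stm, int count)
        {
            // Stream.Read may return fewer bytes than requested; loop until full.
            var buf = new byte[count];
            int off = 0;
            while (off < count)
            {
                int n = stm.Read(buf, off, count - off);
                if (n == 0) throw new EndOfStreamException();
                off += n;
            }
            return buf;
        }
    }

    On the server side the s.Send(asen.GetBytes(...)) call would become Wire.SendObject(new NetworkStream(s), result), and the client's fixed 2000-byte read would become Wire.ReceiveObject(stm) - sketched here, not tested against the code above.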

    Read the article

  • How to make Net::Msmgr run?

    - by codeholic
    There's a Net::Msmgr module on CPAN. It's cleanly written, and the code looks trustworthy at first glance. However, this module seems to be beta, and there is little documentation and no tests :-/ Has anyone used this module in production? I haven't managed to make it run so far, because it requires all event loop processing to be done in the application and, as I've already said, there is little documentation and there are no working examples to study. This is how far I've gotten: #!/usr/bin/perl use strict; use warnings; use Event; use Net::Msmgr::Object; use Net::Msmgr::Session; use Net::Msmgr::User; use constant DEBUG => 511; use constant EVENT_TIMEOUT => 5; # seconds my ($username, $password) = qw/[email protected] my.password/; my $buddy = '[email protected]'; my $user = Net::Msmgr::User->new(user => $username, password => $password); my $session = Net::Msmgr::Session->new; $session->debug(DEBUG); $session->login_handler(\&login_handler); $session->user($user); my $conv; sub login_handler { my $self = shift; print "LOGIN\n"; $self->ui_state_nln; $conv = $session->ui_new_conversation; $conv->invite($buddy); } our %watcher; sub ConnectHandler { my ($connection) = @_; warn "CONNECT\n"; my $socket = $connection->socket; $watcher{$connection} = Event->io(fd => $socket, cb => [ $connection, '_recv_message' ], poll => 're', desc => 'recv_watcher', repeat => 1); } sub DisconnectHandler { my $connection = shift; print "DISCONNECT\n"; $watcher{$connection}->cancel; } $session->connect_handler(\&ConnectHandler); $session->disconnect_handler(\&DisconnectHandler); $session->Login; Event::loop(); This is what it outputs: Dispatch Server connecting to: messenger.hotmail.com:1863 Dispatch Server connected CONNECT Dispatch Server >>>VER 1 MSNP2 CVR0 --> VER 1 MSNP2 CVR0 Dispatch Server >>>USR 2 MD5 I [email protected] --> USR 2 MD5 I [email protected] Dispatch Server <<<VER 1 CVR0 <-- VER 1 CVR0 And that's all; here it hangs. The login handler is not being triggered. What am I doing wrong?

    Read the article

  • Windows 8, NVIDIA graphics recognition fails

    - by Roy Grubb
    I just installed Windows 8 Pro OEM 64-bit (clean install) and it won't properly recognize my graphics adapter. When I installed Win8, it automatically installed the BasicDisplay.sys driver dated 6/21/2006, version 6.2.9200.16384 (win8_rtm.120725-1247). Hardware - Mobo: MSi G41M-P33 Combo; CPU: Intel CoreDuo 6600; Graphics: NVIDIA GeForce 9400GT. OS - Windows 8 Pro 64-bit OEM. The graphics adapter worked fine in Windows XP. The PC is a generic box, bought locally, and its mobo failed recently, so I replaced it with the G41M. Microsoft wouldn't let me re-activate Windows XP with a different mobo, so I installed Win8, which appears to work except as described next. Win8 only partially recognizes the graphics adapter and won't allow NVIDIA's latest driver installer to see that it's an NVIDIA card. As a result, OpenGL doesn't work, and this is needed by the software I most use. Other than that the graphics look OK. When I say 'partially recognizes', I mean that via the Control Panel, I can see that the adapter is described as NVIDIA, but the driver remains stuck at Microsoft Basic Display Adapter no matter what I try, including "Update driver..." in adapter properties. Display > Screen Resolution > Advanced Settings > Adapter shows: Adapter Type: Microsoft Basic Display Adapter; Chip Type: NVIDIA; DAC Type: NVIDIA Corporation; BIOS Information: G27 Board - p381n17 (don't know what this means... no mention of 9400GT); Total Available Graphics Memory: 256 MB; Dedicated Video Memory: 0 MB (in fact the adapter has 512MB of on-board video memory); System Video Memory: 0 MB; Shared System Memory: 256 MB. And Control Panel > Device Manager > Display adapters just shows Microsoft Basic Display Adapter. No other graphics adapter, and no unknown device or yellow question mark. What I have tried so far: 1. Cleared CMOS and reset. Updated BIOS and all mobo drivers as follows: 1st I used Driver Reviver to see if any driver updates were required. It found some, but I didn't use that to get the drivers. Then I switched to MSi's own mobo driver utility, Live Update 5. This also showed the board needed several updates, so I used it to fetch the new drivers. After that it showed that everything was up to date, and I checked with Driver Reviver again, which also reported no drivers now needed updating. Rebooted. Went to the NVIDIA site to get the latest graphics adapter driver. Their auto-detect "Option 2: Automatically find drivers for my NVIDIA products" said "The NVIDIA Smart Scan was unable to evaluate your system hardware. Please use Option 1 to manually find drivers for your NVIDIA products." So I downloaded 310.70-desktop-win8-win7-winvista-64bit-international-whql.exe, which lists 9400 GT under supported products, but when I run it, it says: "NVIDIA Installer cannot continue This graphics driver could not find compatible graphics hardware." Connected the display to the on-board Intel graphics (G41 Intel Express), removed the NVIDIA card and rebooted, changed to internal graphics in CMOS. Again it installs the MS Basic Display Adapter and can't properly run my s/w that needs OpenGL. It runs on other machines with Intel Express graphics (WinXP and 7). Shut down and pulled out the power cord. Held the start button to discharge all capacitors. Removed and re-inserted the NVIDIA adapter in the PCI-E slot and made sure it was properly seated. Connected the monitor to the card, screwed plug to socket. Reconnected the power cord. Started and checked in BIOS that Primary Graphics Adapter was set to PCI-E. Started Windows. Uninstalled MS Basic Display Adapter in Device Manager. Screen blanks briefly, reappears. No graphics adapter entry was then visible in Device Manager. Restarted PC. MS Basic Display Adapter visible again in Device Manager. Clicked Device Manager > View > Show hidden devices. No other graphics adapter appears, no unknown devices. Rebooted. Tried Scan for hardware changes. None detected. Tried right-click on MS Basic Display Adapter > Properties > Driver > Update Driver... > Search automatically. It replied that it had determined the driver was up to date. I checked that there were no graphics-driver-related entries in Programs and Features that I could delete (none). Searched for any other drivers with nvidia in their name and deleted them, just keeping the 306.97 installer exe file. Did a Windows Update. Ran GPU-Z, which shows (main items): Microsoft Basic Display Adapter; GPU: G72; BIOS: 5.72.22.76.88; Device ID: 10DE - 01D5; DDR2; Bus Width: 32 Bit; Memory size: 64MB; Driver Version: nvlddmkm 6.2.9200.16384 (ForceWare 0.00) / Win8 64; NVIDIA SLI: Unknown. In the drop-down at the foot, "Microsoft Basic Display Adapter" is the only option. If I swap hard disks in that machine to one with an Ubuntu 10.04 installation (originally installed on the same PC), lspci shows "VGA compatible controller as NVIDIA Corporation Device 01d5 (rev a1) (prog-if 00 [VGA controller])" and "kernel driver in use: nvidia". I'm out of ideas for new things to try and would be really grateful for suggestions. Thanks!

    Read the article

  • Dynamically loading modules in Python (+ multi processing question)

    - by morpheous
    I am writing a Python package which reads the list of modules (along with ancillary data) from a configuration file. I then want to iterate through each of the dynamically loaded modules and invoke a do_work() function in it which will spawn a new process, so that the code runs ASYNCHRONOUSLY in a separate process. At the moment, I am importing the list of all known modules at the beginning of my main script - this is a nasty hack, I feel, and is not very flexible, as well as being a maintenance pain. This is the function that spawns the processes. I would like to modify it to dynamically load the module when it is encountered. The key in the dictionary is the name of the module containing the code: def do_work(work_info): for (worker, dataset) in work_info.items(): #import the module defined by variable worker here... # [Edit] NOT using threads anymore, want to spawn processes asynchronously here... #t = threading.Thread(target=worker.do_work, args=[dataset]) # I'll NOT daemonize since spawned children need to clean up on shutdown # Since the threads will be holding resources #t.daemon = True #t.start() Question 1: When I call the function in my script (as written above), I get the following error: AttributeError: 'str' object has no attribute 'do_work' Which makes sense, since the dictionary key is a string (name of the module to be imported). When I add the statement: import worker before spawning the thread, I get the error: ImportError: No module named worker This is strange, since the variable name rather than the value it holds is being used - when I print the variable, I get the value (as I expect). What's going on? Question 2: As I mentioned in the comments section, I realize that the do_work() function written in the spawned children needs to clean up after itself. My understanding is to write a clean_up function that is called when do_work() has completed successfully, or an unhandled exception is caught - is there anything more I need to do to ensure resources don't leak or leave the OS in an unstable state? Question 3: If I comment out the t.daemon flag statement, will the code still run ASYNCHRONOUSLY? The work carried out by the spawned children is pretty intensive, and I don't want to have to wait for one child to finish before spawning another child. BTW, I am aware that threading in Python is in reality a kind of time sharing/slicing - that's OK. Lastly, is there a better (more Pythonic) way of doing what I'm trying to do? [Edit] After reading a little more about Python's GIL and the threading (ahem - hack) in Python, I think it's best to use separate processes instead (at least IIUC, the script can take advantage of multiple processes if they are available), so I will be spawning new processes instead of threads. I have some sample code for spawning processes, but it is a bit trivial (using lambda functions). I would like to know how to expand it, so that it can deal with running functions in a loaded module (like I am doing above). This is a snippet of what I have: def do_mp_bench(): q = mp.Queue() # Not only thread safe, but "process safe" p1 = mp.Process(target=lambda: q.put(sum(range(10000000)))) p2 = mp.Process(target=lambda: q.put(sum(range(10000000)))) p1.start() p2.start() r1 = q.get() r2 = q.get() return r1 + r2 How may I modify this to process a dictionary of modules and run a do_work() function in each loaded module in a new process?
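    For the record, the ImportError happens because an import statement takes a literal module name, not a variable; the runtime equivalent is __import__ or, more cleanly, importlib.import_module. A hedged sketch combining that with process spawning - module and function names follow the convention described above:

    import importlib
    import multiprocessing as mp

    def do_work(work_info):
        processes = []
        for worker_name, dataset in work_info.items():
            # The dict key is a module name as a string; resolve it to a module object.
            worker = importlib.import_module(worker_name)
            # Each child runs worker.do_work(dataset) in its own process,
            # so the GIL no longer serializes the heavy work.
            p = mp.Process(target=worker.do_work, args=(dataset,))
            # Not daemonized: the children must be free to clean up on shutdown.
            p.start()
            processes.append(p)
        return processes  # the caller can join() these later; spawning never blocks

    (On Windows, Process targets must be picklable, so module-level functions like worker.do_work are safer than the lambdas in the benchmark snippet.)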

    Read the article

< Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >