Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.


  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction

    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and to avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter

    Split core allocations can silently reduce performance because multiple domains with different address spaces and memory contents share the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can vary widely. For applications with very small memory working sets, and with I/O-bound or low CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it

    Split core situations are easily avoided. In most cases the logical domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by one of several actions (see the command sketch below):
    - Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low utilization workloads being consolidated from older machines onto a current T-series, there's no need to assign an entire core to domains that will never use that much capacity.

    In any case, make sure the most important domains have their own CPU cores - in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary

    Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.
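    To make the two allocation styles concrete, here is a minimal console sketch using the commands quoted above (mydomain is the example domain name; the commands run on the control domain, where ldm lives):

      # Thread-level allocation on core boundaries: multiples of 8 vCPUs
      ldm set-vcpu 8 mydomain     # exactly one core's worth of threads
      ldm add-vcpu 24 mydomain    # grow by three cores' worth

      # Whole-core constraint (preferred, newer syntax)
      ldm set-core 1 mydomain     # one whole core
      ldm add-core 3 mydomain     # add three cores; fails if no empty cores exist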

    Read the article

  • Organization & Architecture UNISA Studies – Chap 6

    - by MarkPearl
    Learning Outcomes
    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices

    Magnetic Disk

    How data is stored on and retrieved from magnetic disks: data is recorded on, and later retrieved from, the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.

    The physical characteristics of a magnetic disk: summarize from the book.

    The factors that play a role in the performance of a disk:
    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data

    RAID

    The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory, so secondary storage has become a bit of a bottleneck. RAID works on the concept that if one disk can only be pushed so far, additional gains in performance are to be had by using multiple parallel components. Points to note about RAID:
    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)
    It is interesting to note that increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID makes use of stored parity information that enables the recovery of data lost to a disk failure, as illustrated below.
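    As a quick illustration of parity-based recovery (my own example, not from the textbook notes), the parity block is the bitwise XOR of the data blocks, so any single lost block can be rebuilt from the survivors:

      P = D_1 \oplus D_2 \oplus D_3, \qquad D_2 = P \oplus D_1 \oplus D_3

    For instance, with bytes D_1 = 0x5A, D_2 = 0x3C and D_3 = 0xF0, the parity is P = 0x96; if the disk holding D_2 fails, P \oplus D_1 \oplus D_3 = 0x3C recovers it exactly.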
    The RAID scheme consists of 7 levels:

    - Level 0 (Striping) – Non-redundant. Disks required: N. Data availability: lower than single disk. Large I/O data transfer capacity: very high. Small I/O request rate: very high for both read and write.
    - Level 1 (Mirroring) – Mirrored. Disks required: 2N. Data availability: higher than RAID 2–5 but lower than RAID 6. Large I/O: higher than single disk. Small I/O: up to twice that of a single disk for read.
    - Level 2 (Parallel access) – Redundant via Hamming code. Disks required: N + m. Data availability: much higher than single disk. Large I/O: highest of all listed alternatives. Small I/O: approximately twice that of a single disk.
    - Level 3 (Parallel access) – Bit-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O: highest of all listed alternatives. Small I/O: approximately twice that of a single disk.
    - Level 4 (Independent access) – Block-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O: similar to RAID 0 for read, significantly lower than single disk for write. Small I/O: similar to RAID 0 for read, significantly lower than single disk for write.
    - Level 5 (Independent access) – Block-interleaved distributed parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O: similar to RAID 0 for read, lower than single disk for write. Small I/O: similar to RAID 0 for read, generally lower than single disk for write.
    - Level 6 (Independent access) – Block-interleaved dual distributed parity. Disks required: N + 2. Data availability: highest of all listed alternatives. Large I/O: similar to RAID 0 for read, lower than RAID 5 for write. Small I/O: similar to RAID 0 for read, significantly lower than RAID 5 for write.

    Read pages 215–221 for a detailed explanation of the RAID levels.

    Optical Memory

    There is a variety of optical-disk systems available; read through the table on pages 222–223. Some of the devices include: CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, Blu-ray DVD.

    Magnetic Tape

    Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical technique used with serial recording is referred to as serpentine recording. In this technique, when data is being recorded the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues back and forth until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks. A tape drive is a sequential access device. Magnetic tape was the first kind of secondary memory, and it is still widely used as the lowest-cost, slowest-speed member of the memory hierarchy.

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions.

    Understanding and Improving Your Java Process

    Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts for affecting the performance of your Java process - from the beginning, the JVM specification, to interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory: you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (sized by -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. (a concrete launch line is sketched at the end of this post). If you get long pause times, make sure to monitor your application and watch the allocation rate and the frequency of pause times.

    My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tool suite for HotSpot, coming soon. The room was packed and the talk much appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging.

    My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc.), and on where to download early versions of 8, to report problems early on.

    Software Development in Teams

    Being a scout leader, I'm drilled in different team-building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania, with concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes).

    Memory Master Keynote

    The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by having the audience learn a memory-mastering technique for remembering ten ordered things by heart, asking us to shout out the order backwards - and we made it! I desperately need this - I bought the book and will get back on the subject.

    Continuous Delivery

    The most impressive presenter was Axel Fontaine on Continuous Delivery: very well prepared slides with key images of his message, and he moved about the stage like a rock star. The topic is of course highly interesting - how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for how to rewrite them. To conclude, we had a great time and hope to see you at jDays next year :)
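    As a concrete footnote on the sizing flags mentioned above, a launch line might look like this (the class name and sizes are hypothetical, chosen only for illustration):

      # 512 MB initial heap, 2 GB max heap, 1 MB per thread stack, GC logging on
      java -Xms512m -Xmx2g -Xss1m -verbose:gc MyServer

    Keep in mind that -Xmx bounds only the Java heap: the thread stacks, JIT-compiled code and the JVM's internal bookkeeping come on top, which is why the process footprint always exceeds the heap size.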

    Read the article

  • Problem Burning a DVD with Windows 7

    - by littlechris
    Hi, when I burn my files to a DVD using the Windows 7 drag-and-drop burning, I can't actually see my files afterwards. Windows doesn't seem to finish the DVD, even though I have followed the steps to do this. When I put the DVD in my XP machine it can see the files - so I know they are definitely being written correctly. Why can't I seem to finish the DVD correctly? Is there some view I have to set to see files already written? Thanks

    Read the article

  • Is there a way to combine multiple TextMate windows?

    - by yahelc
    After files have already been opened in separate windows, is there a way to combine them in TextMate 1.5.9? Or is there a way to get CyberDuck to open a group of files edited at the same time as a TextMate project? If you open a folder of files in TextMate, it groups them all in one window with a project drawer (screenshot omitted). But if I open multiple files separately (as my FTP client forces me to), they appear in multiple windows. Is there any way to combine multiple TextMate windows into one, so that they appear grouped as in a project window?
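    One related aside (my note, not from the original question): TextMate 1.x installs a mate command-line tool, and pointing it at a directory opens that whole directory as a single project window, which avoids the one-window-per-file behavior whenever the files share a folder:

      # Open an entire folder as one TextMate project window
      mate ~/Sites/myproject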

    Read the article

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the progress bar filling and different phases are mentioned on the progress bar. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see the Layers palette is populated with the chosen files and, quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. The problem is that the layers and the image disappear in a flash. There is no error message. Everything is as it was prior to starting the Photomerge. No file has been changed, and I can continue to use Photoshop normally. This is what I've tried so far:
    - Loaded a folder which has 38 JPG images, 4272 x 2848 and ~5 megabytes per file
    - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
    - Loaded 19 JPG images, 4272 x 2848 and ~5 megabytes per file
    - Loaded 10 JPG images, see above
    - Loaded 5 JPG images, see above
    - Loaded 3 JPG images, see above
    - Scaled the images down to 2256 x 1504 and under 1 megabyte per file
    - Loaded those in sets of 38, 19, 10, 5 and 3
    The following steps were tested with these smaller files and with a set of 5 images:
    - Read Adobe's forums and gradually reduced the amount of RAM Photoshop uses from ~80 % to 50 % (though I didn't understand the logic behind this)
    - Would have reduced the cache tile size to 128K, but it was already set to that
    - Disabled OpenGL
    - Scaled the images down to 800 x 533 and ~100 kilobytes per file, loaded a set of 5
    - Read more unanswered threads around the internet
    In between each test I closed and reopened Photoshop. This is the first time I've ever tried using Photomerge. Am I doing something wrong? How can I locate the problem? How do I fix this? Photoshop is the 64-bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.

    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which is effectively the same as Photomerge (even the dialog looks kind of the same), works! Even with the original JPGs, without any issues. This doesn't fix Photomerge, though.

    Read the article

  • trying to set up wireless

    - by JohnMerlino
    I'm trying to set up wireless on vostro 1520 dell laptop, with latest Ubuntu install. Here's the output of some of the commands that I was told to run: lshw -C network viggy@ubuntu:~$ lshw -C network WARNING: you should run this program as super-user. *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:08:00.0 logical name: eth0 version: 03 serial: 00:24:e8:da:84:25 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.2.6 latency=0 multicast=yes port=MII speed=100Mbit/s resources: irq:47 ioport:3000(size=256) memory:f6004000-f6004fff memory:f6000000-f6003fff memory:f6020000-f603ffff *-network description: Network controller product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:0e:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:18 memory:fa000000-fa003fff *-network DISABLED description: Wireless interface physical id: 1 logical name: wlan0 serial: 0c:60:76:05:ee:74 capabilities: ethernet physical wireless configuration: broadcast=yes driver=b43 driverversion=3.2.0-29-generic firmware=N/A multicast=yes wireless=IEEE 802.11bg lspci 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03) 00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03) 00:1f.3 
SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03) 0e:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01) 1a:00.0 FireWire (IEEE 1394): O2 Micro, Inc. Device 10f7 (rev 01) 1a:00.1 SD Host controller: O2 Micro, Inc. Device 8120 (rev 01) 1a:00.2 Mass storage controller: O2 Micro, Inc. Device 8130 (rev 01) iwconfig lo no wireless extensions. wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on eth0 no wireless extensions. At this point in time, I don't have wireless.
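    A note on the output above (my reading, not part of the original post): the wlan0 entry is marked DISABLED with firmware=N/A, which means the b43 driver bound to the BCM4312 but found no firmware to load. A commonly suggested first step on Ubuntu, using the working wired connection, is:

      # requires internet access via eth0
      sudo apt-get update
      sudo apt-get install firmware-b43-installer
      sudo reboot

    The iwconfig output also shows Tx-Power=0 dBm; if the interface stays down after the firmware install, check that the radio is not soft-blocked (rfkill list).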

    Read the article

  • restore ntbackup on server 2003 share

    - by user38040
    I have access to a share on a Windows 2003 server; I can create files and folders in this folder. I used ntbackup to back up my files from this share, but when I delete or rename my folder and try to restore, all that is restored is the folders - the files are not restored - although I can restore the files to a local location. Thanks

    Read the article

  • Data breakpoints to find points where data gets broken

    - by raccoon_tim
    When working with a large code base, finding the reasons for bizarre bugs can often be like finding a needle in a haystack. Finding out why an object gets corrupted for no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.

    Scenario

    Take the following scenario as an example. You have defined a class that contains an array of characters 256 characters long. You now implement a method for filling this buffer with a string passed as an argument; the method assumes the buffer is 256 characters long. At some point you notice that you require another character buffer, and you add it after the previous one in the class definition. You then figure that you don't need the 256 characters the first member can hold, and you shorten it to 128 to conserve space. At this point you should start thinking about modifying the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind. Buffer overflow is one of the most frequent sources of errors in a piece of software and often one of the most difficult to detect, especially when data is read from an outside source. Many mass-copy functions provided by the C run-time come in versions that perform boundary checking (defined with the _s suffix), but they cannot guard against hard-coded buffer lengths that at some point get changed.

    Finding the bug

    Getting back to the scenario, you're now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides you with a tool to help you find just these kinds of errors: data breakpoints. To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, then go to Debug, select New Breakpoint and then New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it's often difficult to come up with an expression for data in an object allocated on the heap when not in the context of a certain stack frame.

    There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space, which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint. The application will now break at the address you specified. Often you might immediately spot the issue, but at the very least this feature can point you in the right direction in the search for the real reason the memory gets inadvertently modified.

    Conclusions

    Data breakpoints are a great feature, especially when doing a lot of low-level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use them whenever you need to check when memory at a certain location gets changed, on purpose or inadvertently.

    Read the article

  • Backup Software that Prompts when Changes are made

    - by Noah
    Here is what I want to do: hit backup and back up all my files to a jump drive. Take the files on the road with me and edit them on the jump drive with my netbook. Come back to my computer, hit restore, and have the files uploaded back to my computer - but I want to be prompted about which files have changed and whether I want to replace or keep each old file. Thanks in advance!

    Read the article

  • Does anybody know a replacement for TorrentSpy?

    - by Ither
    Hi, I've used TorrentSpy since 2004 and it does a pretty good job, but it is very slow with torrents that contain hundreds or thousands of files. Does anybody know a better tool for the same job on XP? If you didn't know it, TorrentSpy shows the data contained in a torrent file in a readable way: tracker URLs, number of files, their sizes, number of complete and incomplete downloaders, etc. Edit: What I want is something that can be used from Explorer by right-clicking on the torrent file, like MediaInfo does with media files.

    Read the article

  • How does rsync do incremental backups

    - by Mirage
    How does rsync know which files have changed and which have not? Does it log its data anywhere in a file? I ask because I want to do incremental backups, but the first run will transfer all files. So my main question is: if I upload the initial files via FTP rather than with rsync, will rsync still skip those existing files, or will it upload everything on the first run?
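    For what it's worth (my summary, not part of the question): rsync keeps no database or log of its own. On every run it compares each source file against the same-named file on the destination - by size and modification time by default, or by whole-file checksum with -c - so files that arrived by FTP are skipped as long as their size and mtime match. A dry run shows what would actually be transferred:

      # -n / --dry-run: list what would be copied without copying anything
      rsync -avn /local/files/ user@backuphost:/backups/files/

      # force checksum comparison when mtimes can't be trusted (slower)
      rsync -avcn /local/files/ user@backuphost:/backups/files/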

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the (controversial) conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!)² possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
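    To put a number on that growth (my arithmetic, not part of the original editorial): the count of interleavings of two n-instruction threads is the central binomial coefficient, which grows roughly like 4^n:

      \binom{2n}{n} = \frac{(2n)!}{(n!)^2} \sim \frac{4^n}{\sqrt{\pi n}}

    Even at n = 10 instructions per thread, that is already \binom{20}{10} = 184{,}756 distinct orderings a correct program has to tolerate.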

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs - a big binary, and a small XML index - and we get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need to get more complex?

    Long story as follows: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers - they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we're finding any number of bugs and instances of poorly handled error checking. None of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us are: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we are concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present, so our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error, and we need good logs. If you were presented with this, would you just script up some rsync? Or would you build or buy a tool - and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
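    As a rough illustration of the shape such a script could take (a sketch only - the host name, paths and the *.bin/*.xml patterns are hypothetical stand-ins for whatever the CTL file really defines):

      #!/bin/sh
      # Phase 1: move the big binaries, skipping anything touched in the
      # last 5 minutes so a file still being written is left alone.
      cd /staging || exit 1
      find . -name '*.bin' -mmin +5 -print0 \
        | rsync -av --from0 --files-from=- --remove-source-files . aixhost:/dest/
      # Phase 2: move the XML indexes last, so the ingester on the AIX
      # side only fires once a bundle's trigger file appears.
      find . -name '*.xml' -mmin +5 -print0 \
        | rsync -av --from0 --files-from=- --remove-source-files . aixhost:/dest/

    rsync itself has no notion of "still being written" - the find -mmin guard is doing that work - while --remove-source-files only deletes a source file after its destination copy has been fully transferred.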

    Read the article

  • rsync delete remote duplicates

    - by BlakBat
    I'm trying to delete remote duplicate files without transferring the non-existing files and without updating the existing files. If I specify both --existing and --ignore-existing (along with -av --remove-source-files), the operation is a no-op: nothing will be transferred, but nothing will be deleted either. The best I've got so far is to make a local copy of the destination, use rsync without --ignore-existing, then rsync my local copy on top of the destination.

    Read the article

  • Multi level or progressive or incremental file search

    - by iraSenthil
    I am looking for a GUI tool on Windows where I can do a search, pass the result to the next search, and continue. I know I can do this on the command line by piping one search result to another, but I am looking for a GUI tool. Here is a sample search: first, find all files that have the extension ".java". From the result, find all files that contain a specific word. From that result, select a few files and search only those files for another keyword.
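    The command-line pipeline alluded to above would look something like this (the keywords are placeholders):

      # stage 1 + 2: all .java files that contain "specificWord"
      find . -name '*.java' -print0 | xargs -0 grep -l 'specificWord'

      # stage 3: search a hand-picked subset of those files for another keyword
      grep -l 'anotherKeyword' src/Foo.java src/Bar.java

    A GUI equivalent would need to keep each result set live so that the next search runs only over the previous results.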

    Read the article

  • Using pip to install modules in python failing

    - by James N
    I'm having trouble installing Python modules using pip. Below is the output from the command window. Note that I installed pip immediately before trying to install the GDAL module. I am on a Windows 7 64-bit machine running Python 2.7.

      Microsoft Windows [Version 6.1.7601]
      Copyright (c) 2009 Microsoft Corporation. All rights reserved.

      C:\Users\jnunn\Desktop>python get-pip.py
      Downloading/unpacking pip
      Downloading pip-1.2.1.tar.gz (102Kb): 102Kb downloaded
      Running setup.py egg_info for package pip
      warning: no files found matching '*.html' under directory 'docs'
      warning: no previously-included files matching '*.txt' found under directory 'docs\_build'
      no previously-included directories found matching 'docs\_build\_sources'
      Installing collected packages: pip
      Running setup.py install for pip
      warning: no files found matching '*.html' under directory 'docs'
      warning: no previously-included files matching '*.txt' found under directory 'docs\_build'
      no previously-included directories found matching 'docs\_build\_sources'
      Installing pip-script.py script to C:\Python26\ArcGIS10.1\Scripts
      Installing pip.exe script to C:\Python26\ArcGIS10.1\Scripts
      Installing pip.exe.manifest script to C:\Python26\ArcGIS10.1\Scripts
      Installing pip-2.7-script.py script to C:\Python26\ArcGIS10.1\Scripts
      Installing pip-2.7.exe script to C:\Python26\ArcGIS10.1\Scripts
      Installing pip-2.7.exe.manifest script to C:\Python26\ArcGIS10.1\Scripts
      Successfully installed pip
      Cleaning up...

      C:\Users\jnunn\Desktop>pip install gdal
      Downloading/unpacking gdal
      Downloading GDAL-1.9.1.tar.gz (420kB): 420kB downloaded
      Running setup.py egg_info for package gdal
      Installing collected packages: gdal
      Running setup.py install for gdal
      building 'osgeo._gdal' extension
      c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -I../../port -I../../gcore -I../../alg -I../../ogr/ -IC:\Python26\ArcGIS10.1\include -IC:\Python26\ArcGIS10.1\PC -IC:\Python26\ArcGIS10.1\lib\site-packages\numpy\core\include /Tpextensions/gdal_wrap.cpp /Fobuild\temp.win32-2.7\Release\extensions/gdal_wrap.obj
      gdal_wrap.cpp
      c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
      extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory
      error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
      Complete output from command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed:
      running install
      running build
      running build_py
      creating build
      creating build\lib.win32-2.7
      copying gdal.py -> build\lib.win32-2.7
      copying ogr.py -> build\lib.win32-2.7
      copying osr.py -> build\lib.win32-2.7
      copying gdalconst.py -> build\lib.win32-2.7
      copying gdalnumeric.py -> build\lib.win32-2.7
      creating build\lib.win32-2.7\osgeo
      copying osgeo\gdal.py -> build\lib.win32-2.7\osgeo
      copying osgeo\gdalconst.py -> build\lib.win32-2.7\osgeo
      copying osgeo\gdalnumeric.py -> build\lib.win32-2.7\osgeo
      copying osgeo\gdal_array.py -> build\lib.win32-2.7\osgeo
      copying osgeo\ogr.py -> build\lib.win32-2.7\osgeo
      copying osgeo\osr.py -> build\lib.win32-2.7\osgeo
      copying osgeo\__init__.py -> build\lib.win32-2.7\osgeo
      running build_ext
      building 'osgeo._gdal' extension
      creating build\temp.win32-2.7
      creating build\temp.win32-2.7\Release
      creating build\temp.win32-2.7\Release\extensions
      c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -I../../port -I../../gcore -I../../alg -I../../ogr/ -IC:\Python26\ArcGIS10.1\include -IC:\Python26\ArcGIS10.1\PC -IC:\Python26\ArcGIS10.1\lib\site-packages\numpy\core\include /Tpextensions/gdal_wrap.cpp /Fobuild\temp.win32-2.7\Release\extensions/gdal_wrap.obj
      gdal_wrap.cpp
      c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
      extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory
      error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
      ----------------------------------------
      Command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed failed with error code 1 in c:\users\jnunn\appdata\local\temp\pip-build\gdal
      Storing complete log in C:\Users\jnunn\pip\pip.log

      C:\Users\jnunn\Desktop>

    I have tried to use easy_install before too, and it came back with an error similar to this:

      c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
      extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory
      error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2

    Plus the following additional pip.log:

      Exception information:
      Traceback (most recent call last):
        File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\basecommand.py", line 107, in main
          status = self.run(options, args)
        File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\commands\install.py", line 261, in run
          requirement_set.install(install_options, global_options)
        File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\req.py", line 1166, in install
          requirement.install(install_options, global_options)
        File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\req.py", line 589, in install
          cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
        File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\util.py", line 612, in call_subprocess
          % (command_desc, proc.returncode, cwd))
      InstallationError: Command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed failed with error code 1 in c:\users\jnunn\appdata\local\temp\pip-build\gdal
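    The decisive line in all of the transcripts above is the compiler error: fatal error C1083 on 'cpl_port.h' means the GDAL C library headers are not on cl.exe's include path, so building the binding from source cannot succeed regardless of what pip does. One hedged workaround (assuming a GDAL SDK is installed somewhere - the path below is hypothetical) is to extend the INCLUDE and LIB environment variables, which cl.exe honors, before re-running pip:

      rem Windows cmd session; adjust the SDK path to your installation
      set INCLUDE=%INCLUDE%;C:\gdal-sdk\include
      set LIB=%LIB%;C:\gdal-sdk\lib
      pip install gdal

    Many people sidestep the compile step entirely by installing a prebuilt GDAL binary package for Windows instead.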

    Read the article

  • Unity.ResolutionFailedException - Resolution of the dependency failed

    - by Anibas
    I have the following code:

      public static IEngine CreateEngine()
      {
          UnityContainer container = Unity.LoadUnityContainer(DefaultStrategiesContainerName);
          IEnumerable<IStrategy> strategies = container.ResolveAll<IStrategy>();
          ITraderProvider provider = container.Resolve<ITraderProvider>();
          return new Engine(provider, new List<IStrategy>(strategies));
      }

    and the config:

      <unity>
        <typeAliases>
          <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
          <typeAlias alias="weakRef" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
          <typeAlias alias="Strategy" type="ADTrader.Core.Contracts.IStrategy, ADTrader.Core" />
          <typeAlias alias="Trader" type="ADTrader.Core.Contracts.ITraderProvider, ADTrader.Core" />
        </typeAliases>
        <containers>
          <container name="strategies">
            <types>
              <type type="Strategy" mapTo="ADTrader.Strategies.ThreeTurningStrategy, ADTrader.Strategies" name="1" />
              <type type="Trader" mapTo="ADTrader.MbTradingProvider.MBTradingProvider, ADTrader.MbTradingProvider" />
            </types>
          </container>
        </containers>
      </unity>

    I am getting the following exception:

      Microsoft.Practices.Unity.ResolutionFailedException: Resolution of the dependency failed, type = "ADTrader.Core.Contracts.ITraderProvider", name = "".
      Exception message is: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3)
      --- Microsoft.Practices.ObjectBuilder2.BuildFailedException: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3)
      --- System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
        at MBTCOMLib.MbtComMgrClass.EnableSplash(Boolean bEnable)
        at ADTrader.MbTradingProvider.MBTradingProvider..ctor()
        at BuildUp_ADTrader.MbTradingProvider.MBTradingProvider(IBuilderContext )
        at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context)
        at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context)
        at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context)
      --- End of inner exception stack trace ---
        at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context)
        at Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing)
        at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name)
      --- End of inner exception stack trace ---
        at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name)
        at Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name)
        at Microsoft.Practices.Unity.UnityContainerBase.Resolve<T>()
        at ADTrader.Engine.EngineFactory.CreateEngine()

    Any idea?

    Read the article

  • Strawberry Perl command line question

    - by Nano HE
    I replaced ActivePerl with Strawberry Perl on my WinXP machine last week. I found that I now must run my Perl scripts with the command perl myperl.pl; before installing Strawberry I only needed to run myperl.pl. How can I run just myperl.pl as before? I checked my environment configuration as below:

      C:\> Path
      C:\Program Files\ActiveState Komodo Edit 5\;C:\Perl\site\bin;C:\Perl\bin;C:\Perl\bin\;C:\Program Files\CodeSynthesis XSD 3.2\bin\;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\Program Files\ATI Technologies\ATI Control Panel;C:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\Common Files\Thunder Network\KanKan\Codecs;C:\strawberry\c\bin;C:\strawberry\perl\bin

    The Strawberry Perl path is already listed in the Path value after a successful install. Is there anything I missed? Thank you for your suggestions.
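    One common cause (an assumption on my part - the question doesn't confirm it): running a script by bare name relies on the Windows file association for .pl, which the ActivePerl uninstall may have removed or left pointing at the old interpreter. From an administrator command prompt, the association can be rebuilt, and adding .PL to PATHEXT even lets you omit the extension:

      rem Point the .pl extension at Strawberry's interpreter
      assoc .pl=PerlScript
      ftype PerlScript="C:\strawberry\perl\bin\perl.exe" "%1" %*
      rem Let cmd find myperl.pl when you type just "myperl"
      set PATHEXT=%PATHEXT%;.PL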

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply retire the Subversion repository and the intermediate Mercurial repository will become the new system of record.

      Dev 1 Local --+--> Mercurial --+--> Subversion
      Dev 2 Local --+                +
      Dev 3 Local --+                +
      Dev 4 -------------------------+

    I've been testing this out, but I keep running into a problem when I push changes from my local repository to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good. The changeset has been pushed. I now switch over to the remote repository and update the working directory:

      hg update
      1 files updated, 0 files merged, 0 files removed, 0 files unresolved

    Next, I push the change up to Subversion; works great:

      hg push
      pushing to svn://...
      searching for changes
      [r3834] bmurphy: database namespace
      pulled 1 revisions
      saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp
      adding branch
      adding changesets
      adding manifests
      adding file changes
      added 1 changesets with 1 changes to 1 files
      rebase completed

    At this point, the change is in the Subversion repository and I return attention back to my local client. I pull changes to my local machine. Huh? I've now got two changesets. My original changeset now appears as a local branch. The other changeset has a new revision number, 2264, and a new hash, 10c1... Anyway, I update my local repo to the new revision. I'm now switched over. So, I finally click "determine and mark outgoing changesets" and, as you can see, Mercurial still wants to push out my previous changeset even though it has already been pushed. Clearly, I'm doing something wrong.

    I also can't merge the two revisions. If I merge the two revisions on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem:

      hg update
      0 files updated, 0 files merged, 0 files removed, 0 files unresolved
      hg push
      pushing to svn://...
      searching for changes
      abort: Sorry, can't find svn parent of a merge revision.

    and I have to roll back the merge to get back to a working state. What am I missing?
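    If the cause is what the transcript suggests - the intermediate repository rebases/rewrites each changeset as it goes into Subversion, so the original local changeset (625...) and the rewritten one (10c1...) are different nodes - then the usual cure is to discard the superseded local head rather than merge it. A hedged sketch (hg strip comes from the mq/strip extension, which must be enabled; the hashes are the ones from the post):

      hg pull              # brings in the rewritten changeset 10c1...
      hg update 10c1       # make the rewritten changeset the working parent
      hg strip 625         # drop the pre-rewrite head instead of merging it

    With the old head stripped, outgoing should come up empty, and the merge changesets that break the svn push never get created.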

    Read the article

  • Using Image Source with big images in WPF

    - by xyzzer
    I am working on an application that allows users to manipulate multiple images by using ItemsControl. I started running some tests and found that the app has problems displaying some big images - i.e. it did not work with the high resolution (21600x10800), 20MB images from http://earthobservatory.nasa.gov/Features/BlueMarble/BlueMarble_monthlies.php, though it displays the 6200x6200, 60MB Hubble telescope image from http://zebu.uoregon.edu/hudf/hudf.jpg just fine. The original solution just specified an Image control with a Source property pointing at a file on disk (through a binding). With the Blue Marble file, the image would just not show up. Now this could be just a bug hidden somewhere deep in the funky MVVM + XAML implementation - the visual tree displayed by Snoop goes like: Window/Border/AdornerDecorator/ContentPresenter/Grid/Canvas/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Border/Grid/ContentPresenter/UserControl/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Viewbox/ContainerVisual/UserControl/Border/ContentPresenter/Grid/Grid/ItemsControl/Border/ItemsPresenter/Canvas/ContentPresenter/Grid/Grid/ContentPresenter/Image... Now debug this! WPF can be crazy like that... Anyway, it turned out that if I create a simple WPF application, the images load just fine. I tried finding the root cause, but I don't want to spend weeks on it. I figured the right thing to do might be to use a converter to scale the images down - this is what I have done:

      ImagePath = @"F:\Astronomical\world.200402.3x21600x10800.jpg";
      TargetWidth = 2800;
      TargetHeight = 1866;

    and

      <Image>
        <Image.Source>
          <MultiBinding Converter="{StaticResource imageResizingConverter}">
            <MultiBinding.Bindings>
              <Binding Path="ImagePath"/>
              <Binding RelativeSource="{RelativeSource Self}" />
              <Binding Path="TargetWidth"/>
              <Binding Path="TargetHeight"/>
            </MultiBinding.Bindings>
          </MultiBinding>
        </Image.Source>
      </Image>

    and

      public class ImageResizingConverter : MarkupExtension, IMultiValueConverter
      {
          public Image TargetImage { get; set; }
          public string SourcePath { get; set; }
          public int DecodeWidth { get; set; }
          public int DecodeHeight { get; set; }

          public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
          {
              this.SourcePath = values[0].ToString();
              this.TargetImage = (Image)values[1];
              this.DecodeWidth = (int)values[2];
              this.DecodeHeight = (int)values[3];
              return DecodeImage();
          }

          private BitmapImage DecodeImage()
          {
              BitmapImage bi = new BitmapImage();
              bi.BeginInit();
              bi.DecodePixelWidth = (int)DecodeWidth;
              bi.DecodePixelHeight = (int)DecodeHeight;
              bi.UriSource = new Uri(SourcePath);
              bi.EndInit();
              return bi;
          }

          public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
          {
              throw new Exception("The method or operation is not implemented.");
          }

          public override object ProvideValue(IServiceProvider serviceProvider)
          {
              return this;
          }
      }

    Now this works fine, except for one "little" problem. When you just specify a file path in Image.Source, the application actually uses less memory and works faster than if you use BitmapImage.DecodePixelWidth. Plus, with Image.Source, if you have multiple Image controls that point to the same image, they only use as much memory as if one image was loaded. With the BitmapImage.DecodePixelWidth solution, each additional Image control uses more memory, and each of them uses more than when just specifying Image.Source.
Perhaps WPF somehow caches these images in compressed form while if you specify the decoded dimensions - it feels like you get an uncompressed image in memory, plus it takes 6 times the time (perhaps without it the scaling is done on the GPU?), plus it feels like the original high resolution image also gets loaded and takes up space. If I just scale the image down, save it to a temporary file and then use Image.Source to point at the file - it will probably work, but it will be pretty slow and it will require handling cleanup of the temporary file. If I could detect an image that does not get loaded properly - maybe I could only scale it down if I need to, but Image.ImageFailed never gets triggered. Maybe it has something to do with the video memory and this app just using more of it with the deep visual tree, opacity masks etc. Actual question: How can I load big images as quickly as Image.Source option does it, without using more memory for additional copies and additional memory for the scaled down image if I only need them at a certain resolution lower than original? Also, I don't want to keep them in memory if no Image control is using them anymore.

    Read the article

  • ASP.NET MVC & Silverlight development - on Ubuntu

    - by queen3
    I recently moved my working environment from Windows 7 to Ubuntu, and enjoy it every minute of my working day. My work is currently to develop ASP.NET MVC and Silverlight applications, so the important Windows stuff is still being run in VirtualBox (such as IIS, MSSQL, Silverlight 3, and legacy COM stuff). For now, I use Visual Studio under VirtualBox as editor/IDE/debugger. But since I prefer Ubuntu fonts and UI much more, I'd like to move at least the editor (and better yet, the IDE) to native Ubuntu.

    Things I have already done:
    - I store project files in my home folder and run Visual Studio from a \vboxsvr share. With a few tricks for ASP.NET, it works.
    - I use svn on Ubuntu.
    - I test my ASP.NET MVC site using Firefox/Firebug on Ubuntu.

    What I need on Ubuntu:
    - A SQL client to manage MSSQL. I mostly need querying, but I would miss SQL IntelliSense support.
    - I enjoy command-line svn a lot, but there are times when it's not enough (e.g. viewing files / checking diffs / selectively committing at the same time), so I wonder if there are any addons - I don't mean a replacement for svn, just addons for rare cases like the above.
    - I wonder if there are editors that can provide some C# IntelliSense. Yes, I know about MonoDevelop, but will it provide IntelliSense without compiling (since I'm going to compile remotely in the Windows box)?

    And a pretty big topic: what's the best way (editor/IDE) to do "folder-based" development? What I mean:
    - The project is /trunk. Everything is there. I don't want to manually add files to a project or the like. The project is the folder and the files down there. The main task is to edit files, of course.
    - I need a quick way to open / search for files in the project. Like in ReSharper: I can hit Ctrl-Shift-T (IIRC) and just type a file name, and a list of matching files in the project folder is shown below. For example, gedit has a file tree browser, but I can't quickly type XYZ to find all XYZ files there; moreover, it doesn't automatically switch focus to/from the editor, so it's more mouse-oriented. I need a 100% keyboard way.
    - I need syntax highlighting for C#/HTML/JS. Most importantly, I need HTML tag autocompletion. I can live without it, but I'll be sorry.
    - I need to run compilation remotely (via ssh, I think, invoking a NAnt script which does MSBuild) and grab results such as errors and warnings, and I'd prefer to jump quickly to the error line/file.

    In short, I need to edit/search/open/svn/compile/run files in some folder. Looks like a case for the command line, but imagine I'm in /trunk and want to open file.cs inside /trunk/foo/bar/boo/far: I wouldn't want to type all of that path even with bash autocompletion's help. I'd prefer to enter :open file.cs, and maybe then select from a list of file.cs and file1.cs. Well, maybe I'll add more soon.

    Actually, I don't have exact requirements; for example, I don't even know if I need to ask for an IDE or an editor; or should it have svn support integrated, or do I need to use it from the console? Do people work with files from the console (search/commit/delete/etc.) and open the editor from there, or do they work from the editor/IDE and manage files (search/commit/delete) from there? What's better? I have a feeling that vim might have everything I need; I'd like to confirm that before I spend a lot of time learning it. No, I don't want Emacs. Any other IDE? I like Eclipse, but is it good for such stuff? And after all, do you feel like it's a good way to go?
    I enjoy Ubuntu, enjoy learning new stuff, and Ubuntu + Windows in VirtualBox actually runs faster than Windows 7 on my machine, but maybe I need to keep development (editor/file management/etc.) in Windows/VirtualBox only, leaving other stuff for Ubuntu?

    Read the article

  • JPA Annotations in Android

    - by dasmaze
    Hello, we have a project that uses JPA/Hibernate on the server side. The mapped entity classes live in their own library project and use annotations to be mapped to the database. I want to use these classes in an Android project - is there any way to ignore the annotations within Android, so these classes can be used as standard POJOs?

    Read the article
