Search Results

Search found 75611 results on 3025 pages for 'copy file'.


  • How is intermediate data organized in MapReduce?

    - by Pedro Cattori
    From what I understand, each mapper outputs an intermediate file. The intermediate data (the data contained in each intermediate file) is then sorted by key. Then a reducer is assigned a key by the master. The reducer reads from the intermediate file containing the key and then calls reduce on the data it has read. But in detail, how is the intermediate data organized? Can the data corresponding to a key be held in multiple intermediate files? What happens when there is too much data for one key to be held in a single file? In short, how do intermediate partitions differ from intermediate files, and how are these differences dealt with in the implementation? (A sketch of the usual key-to-partition assignment follows this entry.)

    Read the article
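
    Editor's note: in Hadoop-style MapReduce, each map output record is routed to a reduce partition by a partitioner, typically hash(key) mod numReducers, so all the data for one key lands in the same partition no matter how many spill files each mapper writes. A minimal sketch of that routing in Python; the names are illustrative, this is not the Hadoop API:

        # Sketch: route map outputs to reduce partitions the way Hadoop's default
        # HashPartitioner does -- hash(key) mod number_of_reducers.
        # All names are illustrative; this is not the Hadoop API.
        from collections import defaultdict

        def partition(key, num_reducers):
            # Every record with the same key maps to the same partition,
            # no matter which mapper (or which spill file) produced it.
            return hash(key) % num_reducers

        def shuffle(map_outputs, num_reducers):
            # map_outputs: iterable of (key, value) pairs from all mappers.
            partitions = defaultdict(list)
            for key, value in map_outputs:
                partitions[partition(key, num_reducers)].append((key, value))
            # Each reducer then sorts (or merges pre-sorted runs) by key before reduce().
            for records in partitions.values():
                records.sort(key=lambda kv: kv[0])
            return partitions

        if __name__ == "__main__":
            outputs = [("cat", 1), ("dog", 1), ("cat", 1), ("bird", 1)]
            print(shuffle(outputs, num_reducers=2))

    Because assignment is per key, one key's data can span many mapper-side files but always ends up in a single reduce partition; in Hadoop the reducer merges the sorted segments and streams the values to reduce() rather than materializing them in one file.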

  • How do I dynamically reload content files?

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files, such as effect files? I know I can do the following:
    1. Detect the change of a file
    2. Run the content pipeline to rebuild that specific file
    3. Unload ALL content that was loaded
    4. Load all content again
    and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the Model.fx effect and I change the Model.fx file, I need to reload Hero.x, which will then call LoadExternalReference on Model.fx. Has someone managed to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference? (A language-agnostic sketch of dependency-aware invalidation follows this entry.)

    Read the article
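
    Editor's note: the usual way around unloading everything is to record which assets reference which others as they load, then invalidate only the changed asset and anything that transitively depends on it. A language-agnostic sketch in Python follows; the names are illustrative and this is not the XNA ContentManager API:

        # Sketch: dependency-aware invalidation for hot-reloading assets.
        # All names are illustrative; this is not XNA's ContentManager.
        from collections import defaultdict

        class AssetCache:
            def __init__(self, loaders):
                self.loaders = loaders               # asset name -> load function
                self.cache = {}                      # asset name -> loaded object
                self.dependents = defaultdict(set)   # asset -> assets that reference it

            def load(self, name, referenced_by=None):
                if referenced_by is not None:
                    self.dependents[name].add(referenced_by)   # record the edge
                if name not in self.cache:
                    self.cache[name] = self.loaders[name](self)
                return self.cache[name]

            def invalidate(self, name):
                # Drop the changed asset and everything that (transitively) uses it;
                # untouched assets stay loaded. Assumes no dependency cycles.
                for dep in list(self.dependents[name]):
                    self.invalidate(dep)
                self.cache.pop(name, None)

        # Usage: a file watcher calls cache.invalidate("Model.fx"); the next
        # cache.load("Hero.x") rebuilds only Hero.x and Model.fx.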

  • Formalizing programmers errors

    - by Maksee
    Every one of us makes errors that lead to bugs. Some time ago I wanted to start logging my errors for future analysis, probably noting the project title, the approximate time spent and, most importantly, the type of error. For example, when I copy-pasted a fragment about 'x', replaced every occurrence of 'x' with 'y' and forgot to replace a tiny piece, this goes under 'copy-paste error'. The usefulness of this approach depends on whether I can formalize my errors at all, and probably on minimizing the number of types to choose from; otherwise I would start postponing, ignoring and so on, making the system useless. Is there existing research in this area, perhaps a known minimal set of error types? Maybe some of you have already tried to implement something like this and succeeded/failed? (A minimal logging sketch follows this entry.)

    Read the article
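
    Editor's note: the mechanics of such a log are trivial; the hard part is the taxonomy. A minimal sketch in Python, with an illustrative (not researched) set of categories:

        # Sketch: append one categorized error record per incident to a CSV file.
        # The category list is illustrative only -- the open question in the post
        # is what a good minimal taxonomy would be.
        import csv, datetime

        CATEGORIES = {"copy-paste", "off-by-one", "wrong-assumption", "typo", "missing-case"}

        def log_error(path, project, minutes_spent, category, note=""):
            assert category in CATEGORIES, "keep the taxonomy small and closed"
            with open(path, "a", newline="") as f:
                csv.writer(f).writerow(
                    [datetime.date.today().isoformat(), project, minutes_spent, category, note])

        log_error("errors.csv", "my-project", 30, "copy-paste", "forgot one x->y replacement")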

  • What would cause different rates of packet loss between client and server in UDP?

    - by febreezey
    If I've implemented a reliable UDP file transfer protocol and I deliberately drop a percentage of packets when I transmit a file, why is the increase in transmission time much more evident when the loss percentage increases on the client-to-server path than on the server-to-client path? Is this something that can be explained as a result of the protocol? Here are my numbers from two separate experiments. I kept the maximum packet size at 500 bytes and the packet loss in the opposite direction at 5%, with a 1 MB file.
    Server-to-client loss percentage varied (1 MB file, 500-byte segments, client-to-server loss 5%):
    1%: 17253 ms
    3%: 3388 ms
    5%: 7252 ms
    10%: 6229 ms
    11%: 12346 ms
    13%: 11282 ms
    15%: 9252 ms
    20%: 11266 ms
    Client-to-server loss percentage varied (1 MB file, 500-byte segments, server-to-client loss 5%):
    1%: 4227 ms
    3%: 4334 ms
    5%: 3308 ms
    10%: 31350 ms
    11%: 36398 ms
    13%: 48436 ms
    15%: 65475 ms
    20%: 120515 ms
    You can clearly see a sharp, roughly exponential increase in the client-to-server group. (A simple loss model is sketched after this entry.)

    Read the article
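
    Editor's note: one plausible explanation, assuming the file data flows client-to-server and acknowledgements are cumulative: a lost data packet always costs a retransmission timeout, while a lost ACK is usually repaired by the next cumulative ACK, so loss on the data direction dominates the transfer time. A back-of-the-envelope model in Python; all parameters are illustrative, not taken from the question:

        # Back-of-the-envelope: why data-direction loss hurts more than ACK-direction
        # loss when ACKs are cumulative. All parameters are illustrative.
        def expected_time_ms(n_packets, data_loss, ack_loss, rtt_ms=20.0, timeout_ms=500.0):
            # A data packet must eventually get through: on average 1/(1 - p) attempts,
            # and every failed attempt costs a retransmission timeout.
            per_packet = rtt_ms + (1.0 / (1.0 - data_loss) - 1.0) * timeout_ms
            # With cumulative ACKs, a lost ACK is usually repaired by the next one,
            # so only a small fraction of ACK losses ever trigger a timeout.
            per_packet += ack_loss * 0.1 * timeout_ms   # crude correction term
            return n_packets * per_packet

        n = 1_000_000 // 500   # ~2000 segments for a 1 MB file in 500-byte pieces
        for p in (0.01, 0.05, 0.10, 0.20):
            print(f"data-loss {p:.0%}: {expected_time_ms(n, p, 0.05):8.0f} ms   "
                  f"ack-loss {p:.0%}: {expected_time_ms(n, 0.05, p):8.0f} ms")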

  • DataSets and XML - The Simplistic Approach

    One of the first ways I learned to read XML data from external data sources was by using a DataSet's ReadXml function. This function takes a file path (or URL) for an XML document and converts it to a DataSet. This functionality is great when you need a simple way to process an XML document. In addition, the DataSet object also offers a simple way to save data in an XML format by using the WriteXml function, which saves the current data in the DataSet to an XML file to be used later.
        DataSet ds = new DataSet();
        string filePath = "http://www.yourdomain.com/someData.xml";
        string fileSavePath = @"C:\Temp\Test.xml";
        // Read the file from this location
        ds.ReadXml(filePath);
        // Save the file to this location
        ds.WriteXml(fileSavePath);
    I have used the ReadXml function before when consuming data from external RSS feeds to display on one of my sites. It allows me to quickly pull in data from external sites with little to no processing. Example site: MyCreditTech.com

    Read the article

  • All my Ubuntu VMs have apt-get update problems

    - by kashani
    I'm running VirtualBox 4.1 on an x86_64 Windows 7 host. I've got a collection of 12.04 and 10.04 LTS VMs I use to create debs for work. In the last week I started noticing problems on the 12.04 VMs. I tried the usual apt-get clean bit, which didn't help. I rolled a new 11.10 VM for testing a WordPress upgrade. This VM has never been able to run apt-get update without errors. The interesting errors look like this:
    Get: 8 http://security.ubuntu.com oneiric-security/main Translation-en_US [344 B]
    14% [7 Sources 48686/877 kB 6%] [Waiting for headers] bzip2: (stdin) is not a bzip2 file.
    Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en
    Hit http://security.ubuntu.com oneiric-security/restricted Translation-en
    Hit http://security.ubuntu.com oneiric-security/universe Translation-en
    22% [7 Sources 127526/877 kB 15%] [Waiting for headers] /usr/bin/xz: (stdin): File format not recognized
    and ends with
    /usr/bin/xz: (stdin): File format not recognized
    Ign http://us.archive.ubuntu.com oneiric/main Translation-en_US
    Ign http://us.archive.ubuntu.com oneiric-updates/main Translation-en_US
    Fetched 18.5 MB in 47s (392 kB/s)
    W: GPG error: http://us.archive.ubuntu.com oneiric InRelease: File /var/lib/apt/lists/partial/us.archive.ubuntu.com_ubuntu_dists_oneiric_InRelease doesn't start with a clearsigned message
    W: GPG error: http://security.ubuntu.com oneiric-security InRelease: File /var/lib/apt/lists/partial/security.ubuntu.com_ubuntu_dists_oneiric-security_InRelease doesn't start with a clearsigned message
    xz-utils, lzma, etc. are all installed. I've reinstalled the VM from scratch three times and ended up at the same point each time.

    Read the article

  • RabbitVCS displaying unchanged files on commit

    - by misterjinx
    I have a strange problem with RabbitVCS. I'm inside a working copy directory and I want to commit some files. When I click the commit button the commit window shows up, but there is a strange situation: even though I have modified just a few files, the commit window displays all the files and directories inside the working copy, with the checkbox ticked for each of them, as though those files need to be committed. But those files were not changed and already exist in the repo. (The original post included a screenshot; the only file that is unversioned/was changed is .htaccess, so it should have been the only file listed there.) Has this happened to anyone? Is it a bug in RabbitVCS (and does a solution exist), or am I doing something wrong?

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service whose responsibility is to generate Excel reports:
        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);
                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);
                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }
    This is pseudo code for the service I want to design, where:
    fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.)
    data: the actual records returned from the database. This data can't be used directly because I might need to make further calculations on the data before inserting it into the Excel file.
    ReportFormat: value object to hold the report format, with methods like getHeaders(), getColumnWidth(), etc.
    ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
    FormattedData: value object that represents the formatted data to be inserted.
    ExcelFileService: a wrapper around the 3rd-party library that generates the Excel file.
    Now, how do you determine whether a service is an infrastructure service or a domain service? I have the following three services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect whether an article or forum post is a duplicate entry within the database. I've given this some thought, coming to the conclusion that someone who duplicates content will do so in one of three ways (in descending difficulty to detect):
    1. simple copy-paste of the whole text
    2. copy and paste parts of the text, merging it with their own
    3. copy an article from an external site and masquerade it as their own
    Prepping text for analysis: basically I strip out any anomalies; the goal is to make the text as "pure" as possible. For more accurate results, the text is "standardized" by:
    Stripping duplicate white space and trimming leading and trailing white space.
    Newlines are standardized to \n.
    HTML tags are removed.
    URLs are stripped using a regex (the Daring Fireball pattern).
    I use BB code in my application, so that is removed too.
    Accented and other non-English characters are converted to their plain form.
    I store information about each article in (1) a statistics table and (2) a keywords table.
    (1) Statistics table: the following statistics are stored about the textual content (much like this post):
    text length
    letter count
    word count
    sentence count
    average words per sentence
    Automated Readability Index
    Gunning fog score
    For European languages, Coleman-Liau and the Automated Readability Index should be used, as they do not use syllable counting and so should produce a reasonably accurate score.
    (2) Keywords table: the keywords are generated by excluding a huge list of stop words (common words), e.g. 'the', 'a', 'of', 'to', etc.
    Sample data:
    text_length, 3963
    letter_count, 3052
    word_count, 684
    sentence_count, 33
    word_per_sentence, 21
    gunning_fog, 11.5
    auto_read_index, 9.9
    keyword 1, killed
    keyword 2, officers
    keyword 3, police
    It should be noted that once an article gets updated, all of the above statistics are regenerated and could be completely different values. How could I use the above information to detect whether an article that is being published for the first time already exists within the database? I'm aware that anything I design will not be perfect, the biggest risks being (1) content that is not a duplicate gets flagged as a duplicate and (2) the system lets duplicate content through. So the algorithm should generate a risk-assessment number from 0 (no duplicate risk) through 5 (possible duplicate) to 10 (duplicate). Anything above 5 means there is a good possibility that the content is a duplicate; in that case the content could be flagged and linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it. As I said before, I'm storing keywords for the whole article; however, I wonder if I could do the same on a per-paragraph basis. This would also mean further separating my data in the DB, but it would make it easier to detect case (2) from my list above. I'm thinking of a weighted average between the statistics, but in what order and with what consequences... (A small scoring sketch follows this entry.)

    Read the article
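
    Editor's note: as a starting point for the weighted-average idea, here is a minimal sketch in Python that combines keyword overlap (Jaccard similarity) with the relative difference of a few of the stored statistics into a 0-10 score. The weights, fields and thresholds are made up for illustration and would need tuning against real data:

        # Sketch: combine keyword overlap and statistic similarity into a 0-10
        # duplicate-risk score. Weights and fields are illustrative, not tuned.
        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def stat_similarity(x, y):
            # 1.0 when equal, falling toward 0.0 as the relative difference grows.
            return 1.0 - min(1.0, abs(x - y) / max(x, y, 1))

        def duplicate_risk(article, candidate, weights=(0.6, 0.2, 0.1, 0.1)):
            kw, wps, gf, ari = weights
            score = (kw  * jaccard(article["keywords"], candidate["keywords"])
                   + wps * stat_similarity(article["word_per_sentence"], candidate["word_per_sentence"])
                   + gf  * stat_similarity(article["gunning_fog"], candidate["gunning_fog"])
                   + ari * stat_similarity(article["auto_read_index"], candidate["auto_read_index"]))
            return round(10 * score, 1)   # 0 = no risk, 10 = near-certain duplicate

        new = {"keywords": {"killed", "officers", "police"},
               "word_per_sentence": 21, "gunning_fog": 11.5, "auto_read_index": 9.9}
        old = {"keywords": {"killed", "officers", "police", "suspect"},
               "word_per_sentence": 20, "gunning_fog": 11.0, "auto_read_index": 9.5}
        print(duplicate_risk(new, old))   # flag for human review if > 5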

  • Setting up dual monitors, Xorg.conf issues

    - by JTS
    I just got a new computer (a W520 with an nVidia GF106 [Quadro 2000] graphics card) and installed Ubuntu on it using Wubi. I have everything working, so I wanted to set it up to use two monitors with an extended screen. I figured I had to edit xorg.conf, but the file didn't exist. So I tried to create it by booting in recovery mode and executing Xorg -configure, but I am getting these errors:
    (EE) Failed to load module "vmwgfx" (module does not exist, 0)
    (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
    (++) Using config file: "/root/xorg.conf.new"
    (==) Using system config directory "/usr/share/X11/xorg.conf.d"
    (EE) [drm] No DRICreatedPCIBusID symbol
    Number of created screens does not match number of detected devices. Configuration failed.
    ddxSigGiveUp: Closing log
    Any idea how I can get Xorg -configure to work, so that I can have an xorg.conf file that I can edit to enable TwinView? EDIT: Another way to ask the same question: why can't I boot with an xorg.conf file generated by nvidia-xconfig? Is there something in the generated xorg.conf file that might need editing?

    Read the article

  • PLEASE HELP RECOVER MY MINT14 BOOT/GRUB [closed]

    - by C2940680
    Hi, I have the following output from bootinfoscript v0.61 (1 Apr 2012). I tried several times to run a boot-repair from YannUbuntu; however, I get an error rebooting into my Linux Mint 14 Cinnamon. I have separate /boot, / and /home partitions. Could I still use the /home partition if I recover its files onto an external USB drive, reformat the whole hard drive, repartition, and then restore /home from the USB drive I saved to? Also, I tried to install Qubes 2 beta and then deleted the partition where it was stored. Also (my bad), I tried to copy the BOOT.CFG from sda6 to sda1 and sda2. All answers appreciated in advance.
    sda1: __________________________________________
    File system: ext2
    Boot sector type: -
    Boot sector info:
    Operating System:
    Boot files: /grub/grub.cfg
    sda2: __________________________________________
    File system: Extended Partition
    Boot sector type: -
    Boot sector info:
    sda5: __________________________________________
    File system: swap
    Boot sector type: -
    Boot sector info:
    sda6: __________________________________________
    File system: ext4
    Boot sector type: -
    Boot sector info:
    Operating System: Linux Mint 14 Nadia
    Boot files: /boot/grub/grub.cfg /etc/fstab

    Read the article

  • Error while loading shared libraries - libwebsock

    - by kittyPL
    I'm trying to set up libwebsock, a simple C websocket library. I followed the installation procedure from the INSTALL file and everything went fine. I'm able to compile the test program given in the examples, but when I want to run my executable, a wild error appears:
    ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory
    I checked /usr/local/lib twice; libwebsock.so.1 exists and is doing very well. I also tried copying the lib into the echo folder (so it's placed next to the binary), still the same error. It's quite funny for me:
    shadowz@Ubu:~/WebSocket$ ls
    echo echo.c echo.cpp libwebsock.so.1
    shadowz@Ubu:~/WebSocket$ ./echo
    ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory
    Any suggestions? I'm running out of ideas...

    Read the article

  • How do I add a shapefile in ArcGIS via python scripting?

    - by Tom W
    I am trying to automate various tasks in ArcGIS Desktop (using ArcMap, generally) with Python, and I keep needing a way to add a shapefile to the current map (and then do stuff to it, but that's another story). The best I can do so far is to add a layer file to the current map, using the following ("addLayer" is a layer file object):
        def AddLayerFromLayerFile(addLayer):
            import arcpy
            mxd = arcpy.mapping.MapDocument("CURRENT")
            df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]
            arcpy.mapping.AddLayer(df, addLayer, "AUTO_ARRANGE")
            arcpy.RefreshActiveView()
            arcpy.RefreshTOC()
            del mxd, df, addLayer
    However, my raw data is always going to be shapefiles, so I need to be able to open them. (Equivalently: convert a shapefile to a layer file without opening it, but I'd prefer not to do that.) (See the sketch after this entry.)

    Read the article
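
    Editor's note: a minimal sketch building on the function above, wrapping a shapefile path in a Layer object and adding it to the active data frame. It assumes arcpy.mapping.Layer accepts a dataset path such as a .shp (reported ArcGIS 10.x behaviour; to be verified against your version):

        # Sketch: add a shapefile to the current map. Assumes arcpy.mapping.Layer
        # accepts a dataset path (.shp), which should be verified for your release.
        import arcpy

        def add_shapefile(shp_path):
            mxd = arcpy.mapping.MapDocument("CURRENT")
            df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]
            layer = arcpy.mapping.Layer(shp_path)      # assumption: path accepted
            arcpy.mapping.AddLayer(df, layer, "AUTO_ARRANGE")
            arcpy.RefreshActiveView()
            arcpy.RefreshTOC()
            del mxd, df

        add_shapefile(r"C:\data\roads.shp")   # illustrative path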

  • Puppet causes endless restarts of CUPS (how does one prevent this)

    - by Purfideas
    Hi! It makes sense, and is in fact suggested on this site, to have a critical file change trigger a service restart with Puppet metaparameters (such as notify or subscribe). For example:
        ## file definition for printers.conf
        file { "/etc/cups/printers.conf":
            [snip],
            source => "puppet:///module/etc/cups/printers.conf"
        }
        ## service definition for cups
        service { 'cups':
            ensure    => running,
            subscribe => File['/etc/cups/printers.conf']
        }
    But in the case of CUPS, this triggers an endless loop of restarts; the logic works like this:
    1. Change the puppetmaster's version of /etc/cups/printers.conf
    2. The puppetmaster pushes the new version to the client, triggering a cups restart
    3. The cupsd restart insists on putting its own time stamp at the top of printers.conf ('Written by cupsd...')
    4. This change will be seen as out of date, so after runinterval we return to (1).
    Is there a way to suppress cupsd's need to time stamp the file? Or is there a Puppet trick that could help here? Thanks!

    Read the article

  • setting up Ubuntu 10.10 as paravirtualized guest in Xen on RHEL5 host - what kernel?

    - by kostmo
    I've discovered the tool ubuntu-vm-builder, which I've installed and then invoked on an Ubuntu workstation as: sudo vmbuilder xen ubuntu --suite maverick --flavour virtual --arch amd64 --mem=512 --rootsize 8192 This workstation is not the intended target host of the virtual machine, however; I would like to host the guest on a Red Hat Enterprise Linux 5 machine that is running Xen 3.0.3. The output of this command appears to be a folder named ubuntu-xen containing three files: tmpXXXXXX, a very large file which I assume is the root partition image tmpYYYYYY, a somewhat large file which I assume is the swap partition image xen.conf, a text file I have copied the xen.conf file to the RHEL server's /etc/xen directory under the new name newvm, adjusting the paths of tempXXXXXX and tempYYYYYYin the file after also copying them from my local workstation to the RHEL server. When I launch the Virtual Machine Manager virt-manager, I can see the newvm virtual machine listed underneath the Dom0 machine. When I try to start newvm, I get the error: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: None') Indeed, there exists an entry kernel = 'None' in the xen.conf file. How do I find out what the path of the kernel should be? Is this path supposed to be to a kernel stored on the local filesystem of the RHEL5 host, or is it supposed to be a path inside the guest image? I see that the vmbuilder command provides for a --xen-kernel option, along with a --xen-ramdisk option, but I'm not sure what to use for either. I think I should be able to get this to work, since Ubuntu is said to be supported as a Xen guest, even though the Xen 4.0.1 docs state support for only a limited set of distributions, Ubuntu excluded. Update 1 When running vmbuilder on my local workstation, I did observe an output line saying: Calling hook: install_kernel and later, output lines saying: update-initramfs: Generating /boot/initrd.img-2.6.35-23-virtual [...] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.35-23-virtual /boot/vmlinuz-2.6.35-23-virtual So in the xen.conf file, I tried setting the lines: kernel = '/boot/vmlinuz-2.6.35-23-virtual' ramdisk = '/boot/initrd.img-2.6.35-23-virtual' When trying to start the VM, I got an error similar to last time: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: /boot/vmlinuz-2.6.35-23-virtual') This makes me think that the RHEL5 machine is looking for local files, rather than a file within the binary guest disk image. After running sudo updatedb on my workstation, neither of those files were found. If the vmbuilder tool had tried to install them, it must have failed. Update 2 I was able to extract the kernel and initrd images from the guest disk binary by mounting it: mkdir mnt_tmp sudo mount ubuntu-xen/tmpXXXXXX mnt_tmp/ -o loop cp mnt_tmp/boot/vmlinuz-2.6.35-23-virtual virtual_kernel_ubuntu cp mnt_tmp/boot/initrd.img-2.6.35-23-virtual virtual_initrd_ubuntu These two files I copied to the RHEL5 server, and edited the xen.conf file to point to them as kernel and ramdisk. With this done, I could "run" the newvm virtual machine from within virt-manager, but was met with the message Console Not Configured For Guest when I double clicked the entry to open the Virtual Machine Console. 
As suggested by a forum, I then added the line vfb = [ 'type=vnc' ] to the configuration file, recreated the virtual machine (a ~10 min process), and this time got the message: Connecting to console for guest This remained indefinitely; after selecting View - Serial Console, I found a kernel panic: [5442621.272173] Kernel panic - not syncing: Attempted to kill the idle task! [5442621.272179] Pid: 0, comm: swapper Tainted: G D 2.6.35-23-virtual #41-Ubuntu [5442621.272184] Call Trace: [5442621.272191] [<ffffffff815a1b81>] panic+0x90/0x111 [5442621.272199] [<ffffffff810652ee>] do_exit+0x3be/0x3f0 [5442621.272204] [<ffffffff815a5e20>] oops_end+0xb0/0xf0 [5442621.272211] [<ffffffff8100ddeb>] die+0x5b/0x90 [5442621.272216] [<ffffffff815a56c4>] do_trap+0xc4/0x170 [5442621.272221] [<ffffffff8100ba35>] do_invalid_op+0x95/0xb0 [5442621.272227] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272232] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272239] [<ffffffff815a48fe>] ? _raw_spin_unlock_irqrestore+0x1e/0x30 [5442621.272247] [<ffffffff8108dfb7>] ? tick_broadcast_oneshot_control+0xc7/0x120 [5442621.272253] [<ffffffff8100ad5b>] invalid_op+0x1b/0x20 [5442621.272259] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272264] [<ffffffff813084e0>] ? intel_idle+0x70/0x180 [5442621.272269] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272275] [<ffffffff8148a147>] cpuidle_idle_call+0xa7/0x140 [5442621.272281] [<ffffffff81008d93>] cpu_idle+0xb3/0x110 [5442621.272286] [<ffffffff815873aa>] rest_init+0x8a/0x90 [5442621.272291] [<ffffffff81b04c9d>] start_kernel+0x387/0x390 [5442621.272297] [<ffffffff81b04341>] x86_64_start_reservations+0x12c/0x130 [5442621.272303] [<ffffffff81b08002>] xen_start_kernel+0x55d/0x561 Update 3 I tried an i386 architecture instead of amd64, but got the same kernel panic. Also, it seems the Virtual Machine Manager pays attention to the format of the filename of the kernel; for the same kernel binary, I tried simply naming it vmlinuz-virtual, which threw out an error box about an invalid kernel. When I named it vmlinuz-2.6.35-23-virtual, it did not throw the error, but it did still result in the kernel panic shortly thereafter.

    Read the article

  • Shutdown issue on persistent LiveUSB

    - by John K
    A) Downloaded Ubuntu to a Windows 7 temp directory from http://www.ubuntu.com/download/ubuntu/download (Ubuntu 11.10, the latest version, 32-bit). The output was a .iso file.
    B) Created a bootable USB stick using Universal USB Installer (http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/). Did not select "Persistent file". Result: tried it on a Dell Latitude D630 laptop; it works every time, through many startups and shutdowns.
    C) Repeated B) but with "Persistent file" set. Works once or twice, but then just locks on the Ubuntu splash screen.
    E) Downloaded LiveUSB Install from http://live.learnfree.eu/download, which created a file on Windows 7 called live-usb-install-2.3.2.exe.
    F) Ran the above installer with "Persistent file" to a 4 GB SanDisk (formatted) thumb drive. Result: worked pretty well for a while. Shut down and rebooted several times. Changes such as creating new directories all worked. Then I set up an admin account with a password and no auto-login. On the next reboot, it required the password and logged in correctly. I tried to shut down via the top-left icon's Shutdown menu option. Key issue: it would not shut down, but would always go back to the login prompt. I could successfully log in again. Finally, I shut down (which just put me back at the login window) and then did a hard shutdown with the power switch. Result: on reboot, it just locks on the Ubuntu splash screen.
    Questions (note, I am very new to Linux):
    Did I shut down incorrectly (I mean prior to the hard power-off)?
    Is the persistence option very unstable, or am I doing something else completely wrong?

    Read the article

  • php include path problem:Same code works on Ubuntu default Apache and php conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the include path. I tried setting include_path to different things but it just doesn't work. The file is in a directory called language in the same folder as the file that is including it, and I'm using:
        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";
    and
        include dirname(__FILE__)."/language/language.php";
    and
        include "language/language.php";
    and a lot of other combinations, but I can't get it to find the file.
    Fatal error: require_once() [function.require]: Failed opening required '/home/neo/public_html/migration/include/class/core/storage.inc' (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration') in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153

    Read the article

  • ASP.NET MVC 2 and Windows Azure

    If you upgrade an Azure web instance to use ASP.NET MVC 2, make sure you mark the System.Web.Mvc reference as Copy Local = true. Otherwise, your deployment will fail, and you won't get any good feedback from Windows Azure as to the cause of the problem. So you'll start searching the web for help, and perhaps you'll stumble on this post, and you'll realize that you didn't set Copy Local = true on your System.Web.Mvc assembly reference in your ASP.NET MVC 2 web instance. And you'll leave happy (or at least slightly happier) than when you came. That is all.

    Read the article

  • Compiling midnight commander

    - by notabene
    Hello, I need help with compiling Midnight Commander so that I can make some changes (for educational purposes), or even just with creating the makefiles. After downloading the latest version from git, I try to run ./autogen.sh. The result is:
    maint/autopoint: 418: cannot open /usr/share/gettext/archive.tar.gz: No such file
    tar: This does not look like a tar archive
    tar: Exiting with failure status due to previous errors
    cvs checkout: cannot find module `archive' - ignored
    find: `archive': No such file or directory
    find: `archive': No such file or directory
    find: `archive': No such file or directory
    autopoint: *** infrastructure files for version 0.14.3 not found; this is autopoint from GNU gettext-tools 0.17
    autopoint: *** Stop.
    I have installed gettext and the folder /usr/share/gettext does exist, but there is no archive.tar.gz. I have no idea what this archive should contain or where to get it. Can you help me please?

    Read the article

  • Sources of NetBeans Gradle Plugin

    - by Geertjan
    Here is where you can find the sources of the latest and greatest NetBeans Gradle plugin: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.1/misc/GradleSupport To use it, download the sources above and open them in the IDE (which must be 7.1.1 or above); you'll then have a NetBeans module. Right-click it to run the module in a new instance of NetBeans IDE. In the Options window's Miscellaneous tab, there's a Gradle subtab for setting the Gradle location. In the New File dialog, in the Other category, you'll find a template named "Empty Gradle file". Make sure to name it "build" and to put it in the root directory of the application (by leaving the Folder field empty, you're specifying that it should be created in the root directory). You'll then be able to expand the build.gradle file and double-click a task to run it. When you open the file, it opens in the Groovy editor, if the Groovy editor is installed. When you make changes in the file, the list of tasks is automatically recreated. It's at a really early stage of development and it would be great if developers out there were interested in adding more features to it.

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen many examples where users failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail-log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has fixed the confusion with additional information on the backup and restore screens themselves. Now that there is additional information, a few more people are confused because they have no clue about it: previously they did not see this as an issue, and now they are finding the tail log to be something new to learn. Linchpin People are database coaches and wellness experts for a data-driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroyed my entire instance of SQL Server and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.
    If I am performing proper backups, then my most recent full, differential and log backup files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do it just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How to fix error in pdf2djvu: "Bogus memory allocation size"

    - by Tim
    I am using pdf2djvu to convert a PDF file into a DjVu file, but got this error while trying to convert to either a bundled or an indirect multi-page DjVu file:
    $ pdf2djvu 1.pdf -o 1.djvu
    1.pdf:
    - page #1 -> #1
    Bogus memory allocation size
    $ pdf2djvu 1.pdf -i 1.djvu
    1.pdf:
    - page #1 -> #1
    Bogus memory allocation size
    I was wondering what is wrong here and how I can fix the problem. You can also suggest an application other than pdf2djvu to convert it to DjVu. My PDF file can be downloaded from here, in case you wonder what is special about it. Thanks and regards

    Read the article

  • Objects won't render when Texture Compression + Mipmapping is Enabled

    - by felipedrl
    I'm optimizing my game and I've just implemented compressed (DXTn) texture loading in OpenGL. I've worked my way removing bugs but I can't figure out this one: objects w/ DXTn + mipmapped textures are not being rendered. It's not like they are appearing with a flat color, they just don't appear at all. DXTn textured objs render and mipmapped non-compressed textures render just fine. The texture in question is 256x256 I generate the mips all the way down 4x4, i.e 1 block. I've checked on gDebugger and it display all the levels (7) just fine. I'm using GL_LINEAR_MIPMAP_NEAREST for min filter and GL_LINEAR for mag one. The texture is being compressed and mipmaps being created offline with Paint.NET tool using super sampling method. (I also tried bilinear just in case) Source follow: [SNIPPET 1: Loading DDS into sys memory + Initializing Object] // Read header DDSHeader header; file.read(reinterpret_cast<char*>(&header), sizeof(DDSHeader)); uint pos = static_cast<uint>(file.tellg()); file.seekg(0, std::ios_base::end); uint dataSizeInBytes = static_cast<uint>(file.tellg()) - pos; file.seekg(pos, std::ios_base::beg); // Read file data mData = new unsigned char[dataSizeInBytes]; file.read(reinterpret_cast<char*>(mData), dataSizeInBytes); file.close(); mMipmapCount = header.mipmapcount; mHeight = header.height; mWidth = header.width; mCompressionType = header.pf.fourCC; // Only support files divisible by 4 (for compression blocks algorithms) massert(mWidth % 4 == 0 && mHeight % 4 == 0); massert(mCompressionType == NO_COMPRESSION || mCompressionType == COMPRESSION_DXT1 || mCompressionType == COMPRESSION_DXT3 || mCompressionType == COMPRESSION_DXT5); // Allow textures up to 65536x65536 massert(header.mipmapcount <= MAX_MIPMAP_LEVELS); mTextureFilter = TextureFilter::LINEAR; if (mMipmapCount > 0) { mMipmapFilter = MipmapFilter::NEAREST; } else { mMipmapFilter = MipmapFilter::NO_MIPMAP; } mBitsPerPixel = header.pf.bitcount; if (mCompressionType == NO_COMPRESSION) { if (header.pf.flags & DDPF_ALPHAPIXELS) { // The only format supported w/ alpha is A8R8G8B8 massert(header.pf.amask == 0xFF000000 && header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF); mInternalFormat = GL_RGBA8; mFormat = GL_BGRA; mDataType = GL_UNSIGNED_BYTE; } else { massert(header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF); mInternalFormat = GL_RGB8; mFormat = GL_BGR; mDataType = GL_UNSIGNED_BYTE; } } else { uint blockSizeInBytes = 16; switch (mCompressionType) { case COMPRESSION_DXT1: blockSizeInBytes = 8; if (header.pf.flags & DDPF_ALPHAPIXELS) { mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT; } else { mInternalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT; } break; case COMPRESSION_DXT3: mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT; break; case COMPRESSION_DXT5: mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT; break; default: // Not Supported (DXT2, DXT4 or any compression format) massert(false); } } [SNIPPET 2: Uploading into video memory] massert(mData != NULL); glGenTextures(1, &mHandle); massert(mHandle!=0); glBindTexture(GL_TEXTURE_2D, mHandle); commitFiltering(); uint offset = 0; Renderer* renderer = Renderer::getInstance(); switch (mInternalFormat) { case GL_RGB: case GL_RGBA: case GL_RGB8: case GL_RGBA8: for (uint i = 0; i < mMipmapCount + 1; ++i) { uint width = std::max(1U, mWidth >> i); uint height = std::max(1U, mHeight >> i); glTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height, mHasBorder, mFormat, mDataType, &mData[offset]); offset 
+= width * height * (mBitsPerPixel / 8); } break; case GL_COMPRESSED_RGB_S3TC_DXT1_EXT: case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT: case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT: case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT: { uint blockSize = 16; if (mInternalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT || mInternalFormat == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) { blockSize = 8; } uint width = mWidth; uint height = mHeight; for (uint i = 0; i < mMipmapCount + 1; ++i) { uint nBlocks = ((width + 3) / 4) * ((height + 3) / 4); // Only POT textures allowed for mipmapping massert(width % 4 == 0 && height % 4 == 0); glCompressedTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height, mHasBorder, nBlocks * blockSize, &mData[offset]); offset += nBlocks * blockSize; if (width <= 4 && height <= 4) { break; } width = std::max(4U, width / 2); height = std::max(4U, height / 2); } break; } default: // Not Supported massert(false); } Also I don't understand the "+3" in the block size computation but looking for a solution for my problema I've encountered people defining it as that. I guess it won't make a differente for POT textures but I put just in case. Thanks.

    Read the article
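
    Editor's note on the "+3" in the block-count computation in the entry above: block-compressed formats store the image in 4x4 texel blocks, and (width + 3) / 4 in integer arithmetic is simply ceiling division, rounding a non-multiple-of-4 dimension up to the next whole block (for power-of-two sizes of 4 and larger it changes nothing). A small Python sketch of the size computation, with DXT1 using 8-byte blocks and DXT3/DXT5 using 16-byte blocks:

        # Sketch: bytes occupied by one mip level of a block-compressed (DXTn) texture.
        # (w + 3) // 4 is ceiling division: it rounds partial blocks up to whole ones.
        def dxt_level_size(width, height, block_bytes):
            blocks_x = (width + 3) // 4
            blocks_y = (height + 3) // 4
            return blocks_x * blocks_y * block_bytes

        def dxt_total_size(width, height, mip_levels, block_bytes):
            total = 0
            for level in range(mip_levels):
                total += dxt_level_size(max(1, width >> level), max(1, height >> level), block_bytes)
            return total

        print(dxt_level_size(256, 256, 16))      # one 256x256 DXT5 level: 65536 bytes
        print(dxt_total_size(256, 256, 7, 8))    # 256x256 DXT1 with 7 mip levels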

  • I am getting an error while writing my first app as shown in developer get-started tutorial

    - by TrickyJ
    I am trying to learn to develop apps on Ubuntu and currently I am going through this tutorial. As shown in the video, I am writing the code below:
    self.refreshbutton = self.builder.get_object("refreshbutton")
    def on_refreshbutton_clicked(self, widget):
    print "Refresh"
    As soon as I try to run my application it gives me an error. I type this command to run my application: quickly run
    (trickybrowser:4418): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1971:11: Not using units is deprecated. Assuming 'px'.
    (trickybrowser:4418): Gtk-WARNING **: Failed to parse /usr/share/themes/mac-os-lion-theme-v2/gtk-3.0/settings.ini: Key file contains line '/* ' which is not a key-value pair, group, or comment
    Traceback (most recent call last):
    File "bin/trickybrowser", line 32, in <module>
    import trickybrowser
    File "/home/tricky/trickybrowser/trickybrowser/__init__.py", line 14, in <module>
    from trickybrowser import TrickybrowserWindow
    File "/home/tricky/trickybrowser/trickybrowser/TrickybrowserWindow.py", line 32
    print "Refresh"
    ^
    IndentationError: expected an indented block
    (A corrected indentation sketch follows this entry.)

    Read the article
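
    Editor's note: the traceback is an IndentationError pointing at the handler's body; the print statement needs to be indented under the def, and the handler defined at class scope. A sketch of how those lines would typically look; the class and method scaffolding here is approximate, the fix is the indentation:

        # Sketch: the body of the handler must be indented under its def.
        # Class/method scaffolding is approximate; only the indentation is the fix.
        class TrickybrowserWindow(object):
            def setup(self, builder):
                self.builder = builder
                self.refreshbutton = self.builder.get_object("refreshbutton")

            def on_refreshbutton_clicked(self, widget):
                print "Refresh"   # Python 2 print statement, now indented under the def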

  • Friday Stats

    - by jjg
    As some of you may have noticed, we've recently opened a new repository in the Code Tools project for small utilities which can be used to gather info about the OpenJDK code base and builds. The latest addition is a utility for analyzing the class file versions in a collection of class files. I've posted an example set of results from analyzing the class files in an OpenJDK build on Linux. Most of the files are version 52 files, as you would expect, but there is a surprising number of version 51 and 50 files, as well as a handful of 45.3 files. Digging deeper, it turns out that Nashorn is still using version 51 class files, and the Serviceability Agent is still using version 50 class files and one 45.3 class file, leaving the remainder of the 45.3 class files coming from RMI. For more info on the different class file versions, see Joe Darcy's class file version decoder ring. Thanks to Stuart Marks for planting the seed for the class file version tool. See the project page, repo, and mail archive. http://cr.openjdk.java.net/~jjg/cfv-summary/open/ (A minimal version-reader sketch follows this entry.)

    Read the article
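
    Editor's note: the class file version the post is counting lives in bytes 4-7 of every .class file (minor then major, big-endian, after the 0xCAFEBABE magic); major version 52 corresponds to Java 8, 51 to Java 7, 50 to Java 6, and 45.3 to the 1.0/1.1 era. A minimal reader in Python (this is not the Code Tools utility itself):

        # Sketch: count class file versions for all .class files under a directory.
        # Bytes 0-3: 0xCAFEBABE magic; bytes 4-5: minor version; bytes 6-7: major version.
        import os, struct, sys
        from collections import Counter

        def class_version(path):
            with open(path, "rb") as f:
                magic, minor, major = struct.unpack(">IHH", f.read(8))
            if magic != 0xCAFEBABE:
                raise ValueError("not a class file: %s" % path)
            return major, minor

        def summarize(root):
            counts = Counter()
            for dirpath, _, names in os.walk(root):
                for name in names:
                    if name.endswith(".class"):
                        counts[class_version(os.path.join(dirpath, name))] += 1
            return counts

        if __name__ == "__main__":
            for (major, minor), n in sorted(summarize(sys.argv[1]).items()):
                print("%d.%d: %d files" % (major, minor, n))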
