Search Results

Search found 13619 results on 545 pages for 'memory mapped'.


  • Configuring ASP.NET MVC ActionLink format with GoDaddy shared hosting

    - by Maxim Z.
    Background
    I have a GoDaddy shared Windows hosting plan and I'm running into a small issue with multiple domains. Many people have previously reported such an issue, but I am not interested in trying to resolve that problem altogether; all I want to accomplish is to change the format of my ActionLinks.

    Issue
    Let's say the domain that is mapped to my root hosting directory is example.com. GoDaddy forces mapping of other domains to subdirectories of the root. For example, my second domain, example1.com, is mapped to example.com/example1. I uploaded my ASP.NET MVC site to such a subdirectory, only to find that the ActionLinks it generates for navigation have the following format: http://example1.com/example1/Controller/Action. In other words, even when I use the domain that is mapped to the subdirectory, the subdirectory still appears in the URL. However, I noticed that I can also reach the same page by going to http://example1.com/Controller/Action (leaving out the subdirectory).

    What I want to achieve
    I want my ActionLinks to automatically drop the subdirectory, as it is not required. Is this possible without changing the ActionLinks into plain-old URLs?

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    We're using SoapClient's class mapping feature and it's pretty sweet. Unfortunately the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if the properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter access, since only SoapClient would be reading the raw properties. E.g. developers would access properties like this: $obj->getAccountNumber(), while SoapClient would access properties like this: $obj->accountNumber. I don't like this because the properties are still exposed and things could go wrong if developers don't stick to the convention.

    2. Have a wrapper for SoapClient that sets a public property the mapped objects can check to see if the property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

        class SoapClientWrapper {
            public function __soapCall($method, $args) {
                $this->setSoapMode(true);
                $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
            }
        }

        class Invoice {
            function __get($val) {
                if($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works, but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually, and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:

    - Using the same PixelFormat as the source bitmap, to avoid format conversion
    - Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer

    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again). I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image).

    I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format.

    What raster image file formats support 48bpp and/or 64bpp?
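    Whatever container format ends up being chosen, the size of the region to memory-map follows directly from the pixel format: PixelFormat64bppPARGB is 8 bytes per pixel, and GDI+ reports the actual row stride in BitmapData::Stride. A minimal C++ sketch of that arithmetic, with helper names of my own choosing (they are not part of GDI+ or of any file format library), might look like this:

        #include <cstddef>

        // For PixelFormat64bppPARGB each pixel is four 16-bit channels = 8 bytes.
        constexpr std::size_t kBytesPerPixel64 = 8;

        // stride: bytes per row as reported by GDI+ (BitmapData::Stride), which may
        // include padding; fall back to a tightly packed row if it is not yet known.
        std::size_t PixelDataBytes(std::size_t width, std::size_t height,
                                   std::size_t stride = 0)
        {
            if (stride == 0)
                stride = width * kBytesPerPixel64;   // tightly packed assumption
            return stride * height;                  // bytes to reserve in the mapped file
        }

    Any 16-bit-per-channel container would then add its own fixed-size header on top of this, just as the BMP header does in the 32bpp case described above.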

    Read the article

  • EF4 and multiple abstract levels

    - by Cedric
    I need to use inheritance with EF4 and a TPH model created from the database. I created a new project to test some simple classes. Here is my class model, and here is my table in SQL Server 2008:

    VEHICLE
    ID : int PK
    Owner : varchar(50)
    Consumption : float
    FirstCirculationDate : date
    Type : varchar(50)
    Discriminator : varchar(10)

    I added a condition in my EDMX on the Discriminator field to differentiate the Scooter, Car, Motorbike and Bike entities. MotorizedVehicle and Vehicle are abstract. But when I compile, this error appears:

    Error 3032: Problem in mapping fragments starting at lines 78, 85: EntityTypes EF4InheritanceModel.Scooter, EF4InheritanceModel.Motorbike, EF4InheritanceModel.Car, EF4InheritanceModel.Bike are being mapped to the same rows in table Vehicle. Mapping conditions can be used to distinguish the rows that these types are mapped to.

    Edit, to Ladislav: I tried it, and the error changes to the following for all of my entities:

    Error 3034: Problem in mapping fragments starting at lines 72, 86: An entity is mapped to different rows within the same table. Ensure these two mapping fragments do not map two groups of entities with overlapping keys to two distinct groups of rows.

    To Henk (with Ladislav's suggestion): here are all of the mapping details. What's wrong? Thanks

    Read the article

  • mount not working properly on Cygwin

    - by Code Dance
    I have a WinXP box with Cygwin installed on it. There are many network drives mapped in Windows. When I execute the mount command from Windows (which uses the same mount executable as Cygwin) I get the list of network mapped drives, but when I do the same through Cygwin, I see only C: is mapped.

    On the Windows command prompt:

    C:\CodeDance> mount
    C:\cygwin\bin on /usr/bin type system (textmode)
    C:\cygwin\lib on /usr/lib type system (textmode)
    C:\cygwin on / type system (textmode)
    c:\Own on /own type system (binmode)
    v: on /cygdrive/v type system (binmode)
    c: on /cygdrive/c type system (textmode,noumount)
    k: on /cygdrive/k type system (textmode,noumount)
    l: on /cygdrive/l type system (textmode,noumount)
    m: on /cygdrive/m type system (textmode,noumount)
    o: on /cygdrive/o type system (textmode,noumount)
    x: on /cygdrive/x type system (textmode,noumount)
    y: on /cygdrive/y type system (textmode,noumount)
    z: on /cygdrive/z type system (textmode,noumount)

    In Cygwin, on bash:

    code@DANCE /cygdrive
    $ mount
    C:\cygwin\bin on /usr/bin type system (textmode)
    C:\cygwin\lib on /usr/lib type system (textmode)
    C:\cygwin on / type system (textmode)
    c:\Own on /own type system (binmode)
    v: on /cygdrive/v type system (binmode)
    c: on /cygdrive/c type system (textmode,noumount)

    The /cygdrive/v that is shown mounted above is not accessible either.

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets.

    So roughly what I have in mind is to split the file by the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". If I have a 4 MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those. My questions (a rough range-splitting sketch follows below):

    1. Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
    2. When sending the HTTP requests, should I specify the entire split size in the range, or maybe ask for smaller pieces, say 1024k per request?
    3. When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
    4. Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads running simultaneously?
    5. If I'm not using a memory mapped file, should I open the file once per allowed connection, or simply seek when needing to write to the file? (If I did use a memory mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
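    As a rough sketch of the range-splitting bookkeeping described above (standard C++, no networking, all names my own): the file is divided into N contiguous, inclusive byte ranges, and each range is turned into a "Range" request header value. A server that honours the header answers 206 Partial Content with just those bytes; one that ignores it answers 200 with the whole file, which is the fallback case raised in question 1.

        #include <algorithm>
        #include <cstdint>
        #include <string>
        #include <vector>

        struct ByteRange {
            std::uint64_t start;
            std::uint64_t end;    // inclusive, as HTTP "Range: bytes=start-end" expects
        };

        // Split totalSize bytes into at most `connections` contiguous ranges.
        std::vector<ByteRange> SplitRanges(std::uint64_t totalSize, unsigned connections)
        {
            std::vector<ByteRange> ranges;
            if (totalSize == 0 || connections == 0)
                return ranges;

            // never create more ranges than there are bytes
            const std::uint64_t n = std::min<std::uint64_t>(connections, totalSize);
            const std::uint64_t chunk = totalSize / n;

            std::uint64_t offset = 0;
            for (std::uint64_t i = 0; i < n; ++i) {
                // the last range absorbs any remainder
                const std::uint64_t end =
                    (i + 1 == n) ? totalSize - 1 : offset + chunk - 1;
                ranges.push_back({offset, end});
                offset = end + 1;
            }
            return ranges;
        }

        // Value for the "Range" request header, e.g. "bytes=0-1048575".
        std::string RangeHeaderValue(const ByteRange& r)
        {
            return "bytes=" + std::to_string(r.start) + "-" + std::to_string(r.end);
        }

    For a 4 MB file and 4 connections this yields bytes=0-1048575, bytes=1048576-2097151, and so on; resuming then becomes a matter of persisting which ranges have already completed and only issuing requests for the rest.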

    Read the article

  • Access violations in strange places when using Windows file dialogs

    - by Robert Oschler
    A long time ago I found out that I was getting access violations in my code due to the use of the Delphi Open File and/or Save File dialogs, which encapsulate the Windows dialogs. I asked some questions on a few forums and I was told that it may have been due to the way some programs add hooks to the shell system that result in DLLs getting injected in every process, some of which can cause havoc with a program. For the record, the programming environment I use is Delphi 6 Professional running on Windows XP 32-bit. At the time I got around it by not using Delphi's Dialog components and instead calling straight into comdlg32.dll. This solved the problem wonderfully. Today I was working with memory mapped files for the first time and sure enough, access violations started cropping up in weird parts of the code. I tried my comdlg32.dll direct calls and this time it didn't help. To isolate the problem as a test I created a list box with the exact same files I was using during testing. These are the exact same test files I was selecting from an Open File dialog and then launching my memory mapped file with. I set things up so that by clicking on a file in the list box, I would use that file in my memory mapped file test instead of calling into a comdlg32.dll dialog function to select a test file. Again, the access violatons vanished. To show you how dramatic a fix it was I went from experiencing an access violation within 1 to 3 trials to none at all. Unfortunately, it's going to bite me later on of course when I do need to use file dialogs. Has anyone else dealt with this issue too and found the real culprit? Did any of you find a solution I could use to fix this problem instead of dancing around it like I am now? Thanks in advance.
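    For readers unfamiliar with what such a memory-mapped file test involves, the underlying Win32 sequence is the same regardless of language. A minimal C++ illustration (not the author's Delphi code, just the generic pattern: open the file, create a file mapping, map a view, touch the bytes, clean up) looks roughly like this:

        #include <windows.h>

        // Map an existing file read-only and return its first byte, or -1 on failure.
        int FirstByteOfFile(const wchar_t* path)
        {
            int result = -1;
            HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                                      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
            if (file == INVALID_HANDLE_VALUE)
                return result;

            HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
            if (mapping != nullptr) {
                const unsigned char* view = static_cast<const unsigned char*>(
                    MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));
                if (view != nullptr) {
                    result = view[0];            // read through the mapped view
                    UnmapViewOfFile(view);
                }
                CloseHandle(mapping);
            }
            CloseHandle(file);
            return result;
        }

    An access violation raised while touching the view, rather than in the mapping calls themselves, is easy to confuse with corruption introduced by an injected shell-extension DLL, which is why isolating the file-selection step (the list box described above) was such a useful test.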

    Read the article

  • pushd - handling multiple drives from cmd

    - by user673600
    I'm trying to figure out how to install some programs whose components reside on two different drives on a networked path. However, whenever I use pushd \\xyz\c$ I get a mapped drive, which means I cannot run something like c:\install e:\mycomponents.dll using the real drive letters. Is there any way I can do this once I have used the pushd command? How can I ensure that the drive letters stay the same, for example?

    I'm in the process of installing services, and it seems that when I install a service, I need to keep the path the same as the actual location of the .exe, which means I'm running into issues. Is there a way to use pushd but at the same time not actually map drives? When installing services using net use, I've found that there is an issue with installing on drives which are mapped: the service, while it can be installed, doesn't find the actual .exe when it comes to starting up.

    So to expand on this, is there a way to solve this using net use or pushd, or a combination, that lets me install a service like this: c:\windows\..\installutil e:\mynode? To clarify, I need to somehow be able to see both drives on the remote machine by their real drive letters, i.e. E:\ and C:\ - if I use a mapped drive letter then installing the service is a pain because I cannot use the path.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server.

    We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

    #MySQL Server Instance Configuration File
    # ----------------------------------------------------------------------
    # Generated by the MySQL Server Instance Configuration Wizard
    #
    # Installation Instructions
    # ----------------------------------------------------------------------
    # On Linux you can copy this file to /etc/my.cnf to set global options, mysql-data-dir/my.cnf to set server-specific options (@localstatedir@ for this installation) or to ~/.my.cnf to set user-specific options.
    # On Windows you should keep this file in the installation directory of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To make sure the server reads the config file use the startup option "--defaults-file".
    # To run run the server from the command line, execute this in a command line shell, e.g.
    # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    # To install the server as a Windows service manually, execute this in a command line shell, e.g.
    # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    # And then execute this in a command line shell to start the server, e.g.
    # net start MySQLXY
    #
    # Guildlines for editing this file
    # ----------------------------------------------------------------------
    # In this file, you can use all long options that the program supports. If you want to know the options a program supports, start the program with the "--help" option.
    # More detailed information about the individual options can also be found in the manual.
    #
    # CLIENT SECTION
    # ----------------------------------------------------------------------
    # The following options will be read by MySQL client applications. Note that only client applications shipped by MySQL are guaranteed to read this section. If you want your own MySQL client program to honor these values, you need to specify it as an option during the MySQL client library initialization.

    [client]
    port=3306

    [mysql]
    default-character-set=latin1

    # SERVER SECTION
    # ----------------------------------------------------------------------
    # The following options will be read by the MySQL Server. Make sure that you have installed the server correctly (see above) so it reads this file.

    [mysqld]

    # The TCP/IP Port the MySQL Server will listen on
    port=3306

    #Path to installation directory. All paths are usually resolved relative to this.
    basedir="D:/MySQL/"

    #Path to the database root
    datadir="D:/MySQL/data"

    # The default character set that will be used when a new schema or table is created and no character set is defined
    default-character-set=latin1

    # The default storage engine that will be used when create new tables when
    default-storage-engine=MYISAM

    # Set the SQL mode to strict
    #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    # we changed this because there are a couple of queries that can get blocked otherwise
    sql-mode=""

    #performance configs
    skip-locking
    max_allowed_packet = 1M
    table_open_cache = 512

    # The maximum amount of concurrent sessions the MySQL server will allow. One of these connections will be reserved for a user with SUPER privileges to allow the administrator to login even if the connection limit has been reached.
    max_connections=1510

    # Query cache is used to cache SELECT results and later return them without actual executing the same query once again. Having the query cache enabled may result in significant speed improvements, if your have a lot of identical queries and rarely changing tables. See the "Qcache_lowmem_prunes" status variable to check if the current value is high enough for your load.
    # Note: In case your tables change very often or if your queries are textually different every time, the query cache may result in a slowdown instead of a performance improvement.
    query_cache_size=168M

    # The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. Therefore you have to make sure to set the amount of open files allowed to at least 4096 in the variable "open-files-limit" in section [mysqld_safe]
    table_cache=3020

    # Maximum size for internal (in-memory) temporary tables. If a table grows larger than this value, it is automatically converted to disk based table This limitation is for a single table. There can be many of them.
    tmp_table_size=30M

    # How many threads we should keep in a cache for reuse. When a client disconnects, the client's threads are put in the cache if there aren't more than thread_cache_size threads from before. This greatly reduces the amount of thread creations needed if you have a lot of new connections. (Normally this doesn't give a notable performance improvement if you have a good thread implementation.)
    thread_cache_size=64

    #*** MyISAM Specific options

    # The maximum size of the temporary file MySQL is allowed to use while recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. If the file-size would be bigger than this, the index will be created through the key cache (which is slower).
    myisam_max_sort_file_size=100G

    # If the temporary file used for fast index creation would be bigger than using the key cache by the amount specified here, then prefer the key cache method. This is mainly used to force long character keys in large tables to use the slower key cache method to create the index.
    myisam_sort_buffer_size=64M

    # Size of the Key Buffer, used to cache index blocks for MyISAM tables. Do not set it larger than 30% of your available memory, as some memory is also required by the OS to cache rows. Even if you're not using MyISAM tables, you should still set it to 8-64M as it will also be used for internal temporary disk tables.
    key_buffer_size=3072M

    # Size of the buffer used for doing full table scans of MyISAM tables. Allocated per thread, if a full scan is needed.
    read_buffer_size=2M
    read_rnd_buffer_size=8M

    # This buffer is allocated when MySQL needs to rebuild the index in REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE into an empty table. It is allocated per thread so be careful with large settings.
    sort_buffer_size=2M

    #*** INNODB Specific options ***
    innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

    # Use this option if you have a MySQL server with InnoDB support enabled but you do not plan to use it. This will save memory and disk space and speed up some things.
    skip-innodb

    # Additional memory pool that is used by InnoDB to store metadata information. If InnoDB requires more memory for this purpose it will start to allocate it from the OS. As this is fast enough on most recent operating systems, you normally do not need to change this value. SHOW INNODB STATUS will display the current amount used.
    innodb_additional_mem_pool_size=11M

    # If set to 1, InnoDB will flush (fsync) the transaction logs to the disk at each commit, which offers full ACID behavior. If you are willing to compromise this safety, and you are running small transactions, you may set this to 0 or 2 to reduce disk I/O to the logs. Value 0 means that the log is only written to the log file and the log file flushed to disk approximately once per second. Value 2 means the log is written to the log file at each commit, but the log file is only flushed to disk approximately once per second.
    innodb_flush_log_at_trx_commit=1

    # The size of the buffer InnoDB uses for buffering log data. As soon as it is full, InnoDB will have to flush it to disk. As it is flushed once per second anyway, it does not make sense to have it very large (even with long transactions).
    innodb_log_buffer_size=6M

    # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and row data. The bigger you set this the less disk I/O is needed to access data in tables. On a dedicated database server you may set this parameter up to 80% of the machine physical memory size. Do not set it too large, though, because competition of the physical memory may cause paging in the operating system. Note that on 32bit systems you might be limited to 2-3.5G of user level memory per process, so do not set it too high.
    innodb_buffer_pool_size=500M

    # Size of each log file in a log group. You should set the combined size of log files to about 25%-100% of your buffer pool size to avoid unneeded buffer pool flush activity on log file overwrite. However, note that a larger logfile size will increase the time needed for the recovery process.
    innodb_log_file_size=100M

    # Number of threads allowed inside the InnoDB kernel. The optimal value depends highly on the application, hardware as well as the OS scheduler properties. A too high value may lead to thread thrashing.
    innodb_thread_concurrency=10

    #replication settings (this is the master)
    log-bin=log
    server-id = 1

    Thanks for all the help. It is greatly appreciated.

    Read the article

  • Windows Azure: Announcing Support for Windows Server 2012 R2 + Some Nice Price Cuts

    - by ScottGu
    Today we released some great updates to Windows Azure:

    - Virtual Machines: Support for Windows Server 2012 R2
    - Cloud Services: Support for Windows Server 2012 R2 and .NET 4.5.1
    - Windows Azure Pack: Use Windows Azure features on-premises using Windows Server 2012 R2
    - Price Cuts: Up to 22% Price Reduction on Memory-Intensive Instances

    Below are more details about each of the improvements.

    Virtual Machines: Support for Windows Server 2012 R2
    This morning we announced the release of Windows Server 2012 R2 – which is a fantastic update to Windows Server and includes a ton of great enhancements. This morning we are also excited to announce that the general availability image of Windows Server 2012 R2 is now supported on Windows Azure. Windows Azure is the first cloud provider to offer the final release of Windows Server 2012 R2, and it is incredibly easy to launch your own Windows Server 2012 R2 instance with it. To create a new Windows Server 2012 R2 instance simply choose New->Compute->Virtual Machine within the Windows Azure Management Portal. You can select the "Windows Server 2012 R2" image and create a new Virtual Machine using the "Quick Create" option, or alternatively click the "From Gallery" option if you want to customize even more configuration options (endpoints, remote powershell, availability set, etc). Creating and instantiating a new Virtual Machine on Windows Azure is very fast. In fact, the Windows Server 2012 R2 image now deploys and runs 30% faster than previous versions of Windows Server. Once the VM is deployed you can drill into it to track its health and manage its settings. Clicking the "Connect" button allows you to remote desktop into the VM – at which point you can customize and manage it as a full administrator however you want. If you haven't tried Windows Server 2012 R2 yet – give it a try with Windows Azure. There is no easier way to get an instance of it up and running!

    Cloud Services: Support for using Windows Server 2012 R2 with Web and Worker Roles
    Today's Windows Azure release also allows you to use Windows Server 2012 R2 and .NET 4.5.1 within Web and Worker Roles within Cloud Service based applications. Enabling this is easy. You can configure an existing Cloud Service application to use Windows Server 2012 R2 by updating your Cloud Service Configuration File (.cscfg) to use the new "OS Family 4" setting. Or alternatively you can use the Windows Azure Management Portal to update cloud services that are already deployed on Windows Azure: simply choose the configure tab on them and select Windows Server 2012 R2 in the Operating System Family dropdown. The approaches above enable you to immediately take advantage of Windows Server 2012 R2 and .NET 4.5.1 and all the great features they provide.

    Windows Azure Pack: Use Windows Azure features on Windows Server 2012 R2
    Today we also made generally available the Windows Azure Pack, which is a free download that enables you to run Windows Azure technology within your own datacenter, an on-premises private cloud environment, or with one of our service provider/hosting partners who run Windows Server. Windows Azure Pack enables you to use a management portal that has the exact same UI as the Windows Azure Management Portal, and within which you can create and manage Virtual Machines, Web Sites, and Service Bus – all of which can run on Windows Server and System Center. The services provided with the Windows Azure Pack are consistent with the services offered within our Windows Azure public cloud offering. This consistency enables organizations and developers to build applications and solutions that can run in any hosting environment – and which use the same development and management approach. The end result is an offering with incredible flexibility. You can learn more about Windows Azure Pack and download/deploy it today here.

    Price Cuts: Up to 22% Reduction on Memory Intensive Instances
    Today we are also reducing prices by up to 22% on our memory-intensive VM instances (specifically our A5, A6, and A7 instances). These price reductions apply to both Windows and Linux VM instances, as well as for Cloud Service based applications. These price reductions will take effect in November, and will enable you to run applications that demand larger memory (such as SharePoint, databases, in-memory analytics, etc.) even more cost effectively.

    Summary
    Today's release enables you to start using Windows Server 2012 R2 within Windows Azure immediately, and take advantage of our Cloud OS vision both within our datacenters – and using the Windows Azure Pack within both your existing datacenters and those of our partners. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,
    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can’t figure out what’s wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out. Test your RAM with memtest86+ RAM problems are difficult to diagnose—they can range from annoying program crashes, or crippling reboot loops. Even if you’re not having problems, when you install new RAM it’s a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that—test your computer’s RAM! Unlike many of the Live CD tools that we’ve looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes. Note: If you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory). Boot up your computer with a Ubuntu Live CD or USB drive. You will be greeted with this screen: Use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing “c” and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time. Test your CPU with cpuburn Random shutdowns – especially when doing computationally intensive tasks – can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem. Note: cpuburn is designed to stress test your computer – it will run it fast and cause the CPU to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution. Boot up your computer with a Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads up, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled “Community-maintained Open Source software (universe)”. Click close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter “cpuburn” in the Quick search text box. Click the checkbox in the left column, and select Mark for Installation. Click the Apply button near the top of the window. As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it’s probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selection Applications > Terminal. Cpuburn includes a number of tools to test different types of CPUs. 
    If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7 and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility "top". If your computer is having a CPU, power supply, or cooling problem, then your computer is likely to shutdown within ten or fifteen minutes. Because of the strain this program puts on your computer, we don't recommend leaving it running overnight – if there's a problem, it should crop up relatively quickly. Cpuburn's tools, including burnP6, have no interface; once they start running, they will start driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running the program.

    Conclusion
    The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they're extremely useful and easy enough that anyone can use them.

    Read the article

  • List of Commonly Used Value Types in XNA Games

    - by Michael B. McLaughlin
    Most XNA programmers are concerned about generating garbage. More specifically about allocating GC-managed memory (GC stands for “garbage collector” and is both the name of the class that provides access to the garbage collector and an acronym for the garbage collector (as a concept) itself). Two of the major target platforms for XNA (Windows Phone 7 and Xbox 360) use variants of the .NET Compact Framework. On both variants, the GC runs under various circumstances (Windows Phone 7 and Xbox 360). Of concern to XNA programmers is the fact that it runs automatically after a fixed amount of GC-managed memory has been allocated (currently 1MB on both systems). Many beginning XNA programmers are unaware of what constitutes GC-managed memory, though. So here’s a quick overview. In .NET, there are two different “types” of types: value types and reference types. Only reference types are managed by the garbage collector. Value types are not managed by the garbage collector and are instead managed in other ways that are implementation dependent. For purposes of XNA programming, the important point is that they are not managed by the GC and thus do not, by themselves, increment that internal 1 MB allocation counter. (n.b. Structs are value types. If you have a struct that has a reference type as a member, then that reference type, when instantiated, will still be allocated in the GC-managed memory and will thus count against the 1 MB allocation counter. Putting it in a struct doesn’t change the fact that it gets allocated on the GC heap, but the struct itself is created outside of the GC’s purview). Both value types and reference types use the keyword ‘new’ to allocate a new instance of them. Sometimes this keyword is hidden by a method which creates new instances for you, e.g. XmlReader.Create. But the important thing to determine is whether or not you are dealing with a value types or a reference type. If it’s a value type, you can use the ‘new’ keyword to allocate new instances of that type without incrementing the GC allocation counter (except as above where it’s a struct with a reference type in it that is allocated by the constructor, but there are no .NET Framework or XNA Framework value types that do this so it would have to be a struct you created or that was in some third-party library you were using for that to even become an issue). The following is a list of most all of value types you are likely to use in a generic XNA game: AudioCategory (used with XACT; not available on WP7) AvatarExpression (Xbox 360 only, but exposed on Windows to ease Xbox development) bool BoundingBox BoundingSphere byte char Color DateTime decimal double any enum (System.Enum itself is a class, but all enums are value types such that there are no GC allocations for enums) float GamePadButtons GamePadCapabilities GamePadDPad GamePadState GamePadThumbSticks GamePadTriggers GestureSample int IntPtr (rarely but occasionally used in XNA) KeyboardState long Matrix MouseState nullable structs (anytime you see, e.g. int? something, that ‘?’ denotes a nullable struct, also called a nullable type) Plane Point Quaternion Ray Rectangle RenderTargetBinding sbyte (though I’ve never seen it used since most people would just use a short) short TimeSpan TouchCollection TouchLocation TouchPanelCapabilities uint ulong ushort Vector2 Vector3 Vector4 VertexBufferBinding VertexElement VertexPositionColor VertexPositionColorTexture VertexPositionNormalTexture VertexPositionTexture Viewport So there you have it. 
That’s not quite a complete list, mind you. For example: There are various structs in the .NET framework you might make use of. I left out everything from the Microsoft.Xna.Framework.Graphics.PackedVector namespace, since everything in there ventures into the realm of advanced XNA programming anyway (n.b. every single instantiable thing in that namespace is a struct and thus a value type; there are also two interfaces but interfaces cannot be instantiated at all and thus don’t figure in to this discussion). There are so many enums you’re likely to use (PlayerIndex, SpriteSortMode, SpriteEffects, SurfaceFormat, etc.) that including them would’ve flooded the list and reduced its utility. So I went with “any enum” and trust that you can figure out what the enums are (and it’s rare to use ‘new’ with an enum anyway). That list also doesn’t include any of the pre-defined static instances of some of the classes (e.g. BlendState.AlphaBlend, BlendState.Opaque, etc.) which are already allocated such that using them doesn’t cause any new allocations and therefore doesn’t increase that 1 MB counter. That list also has a few misleading things. VertexElement, VertexPositionColor, and all the other vertex types are structs. But you’re only likely to ever use them as an array (for use with VertexBuffer or DynamicVertexBuffer), and all arrays are reference types (even arrays of value types such as VertexPositionColor[ ] or int[ ]). * So that’s it for now. The note below may be a bit confusing (it deals with how the GC works and how arrays are managed in .NET). If so, you can probably safely ignore it for now but feel free to ask any questions regardless. * Arrays of value types (where the value type doesn’t contain any reference type members) are much faster for the GC to examine than arrays of reference types, so there is a definite benefit to using arrays of value types where it makes sense. But creating arrays of value types does cause the GC’s allocation counter to increase. Indeed, allocating a large array of a value type is one of the quickest ways to increment the allocation counter since a .NET array is a sequential block of memory. An array of reference types is just a sequential block of references (typically 4 bytes each) while an array of value types is a sequential block of instances of that type. So for an array of Vector3s it would be 12 bytes each since each float is 4 bytes and there are 3 in a Vector3; for an array of VertexPositionNormalTexture structs it would typically be 32 bytes each since it has two Vector3s and a Vector2. (Note that there are a few additional bytes taken up in the creation of an array, typically 12 but sometimes 16 or possibly even more, which depend on the implementation details of the array type on the particular platform the code is running on).

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3Download the presentation here See also: Blog about Alert Monitoring and Problem Notification Blog about Using Operational Profiles to Install Packages and other content Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data Drill down into a process details Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". 
Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from it's connections (like obtaining average, sum, etc) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections or records are known at design time as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing it own data and connections to other records. Doing this increases my memory footprint but also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB called to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connections objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.

    Read the article

  • Profiling NetBeans 7.0 Beta 2 and Reporting Problems

    - by christopher.jones
    With NetBeans 7.0 recently going into Beta 2 phase, now is the time to test it out properly and report issues. The development team has been squashing bugs, including memory issues with the PHP bundle. There are some great new PHP related features in NetBeans 7.0, so you know you want to try it out.

    If you identify something wrong with NetBeans, please report it following the guidelines at http://wiki.netbeans.org/IssueReportingGuidelines

    Depending on the issues, data to attach to the report is mentioned on http://wiki.netbeans.org/FaqLogMessagesFile and http://wiki.netbeans.org/FaqProfileMeNow

    If you have a memory issue then a memory dump would also be useful. Run the jmap tool for this. There is some background information on http://wiki.netbeans.org/FaqMemoryDump. Here's how I used it.

    First I set my environment to match the JDK used by NetBeans. In my case I am using a nightly build, so the JDK is in the configuration file under $HOME/netbeans-dev-201102210501:

    $ egrep netbeans_jdkhome $HOME/netbeans-dev-201102210501/etc/netbeans.conf
    netbeans_jdkhome="/home/cjones/src/jdk1.6.0_24"
    $ export JAVA_HOME=/home/cjones/src/jdk1.6.0_24
    $ export PATH=$JAVA_HOME/bin:$PATH

    Next, I found the correct process number to examine:

    $ ps -ef | egrep 'netbeans|jdk'
    cjones   23230     1  0 16:07 ?        00:00:00 /bin/bash /home/cjones/netbeans-
    cjones   23438 23230  2 16:07 ?        00:00:09 /home/cjones/src/jdk1.6.0_24/bin

    Finally I used the parent JDK process as the jmap argument:

    $ jmap -histo:live 23438
     num     #instances         #bytes  class name
    ----------------------------------------------
       1:         12075        9028656  [I
       2:         49535        6581920  <constMethodKlass>
       3:         49535        3964128  <methodKlass>
       4:         80256        3840776  <symbolKlass>
       5:         36093        3635336  [C
       6:          5095        3341312  <constantPoolKlass>
       7:          5095        2486016  <instanceKlassKlass>
       8:          4325        1961432  <constantPoolCacheKlass>
       9:         18729        1763976  [B
      10:         59952        1438848  java.util.HashMap$Entry
      . . .

    This histogram memory report will help identify the kind of memory issues you are seeing. It may not be as complete as an often tens-of-megabytes jmap -dump:live,file=/tmp/nbheap.log 23438 heap dump, but is much more easily attached to a bug report.

    If you want to keep up to date with NetBeans, nightly builds are at http://bits.netbeans.org/download/trunk/nightly/latest/zip/

    Read the article

  • Little more help with writing a o buffer with libjpeg

    - by Richard Knop
    So I have managed to find another question discussing how to use the libjpeg to compress an image to jpeg. I have found this code which is supposed to work: Compressing IplImage to JPEG using libjpeg in OpenCV Here's the code (it compiles ok): /* This a custom destination manager for jpeglib that enables the use of memory to memory compression. See IJG documentation for details. */ typedef struct { struct jpeg_destination_mgr pub; /* base class */ JOCTET* buffer; /* buffer start address */ int bufsize; /* size of buffer */ size_t datasize; /* final size of compressed data */ int* outsize; /* user pointer to datasize */ int errcount; /* counts up write errors due to buffer overruns */ } memory_destination_mgr; typedef memory_destination_mgr* mem_dest_ptr; /* ------------------------------------------------------------- */ /* MEMORY DESTINATION INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* This function is called by the library before any data gets written */ METHODDEF(void) init_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; /* set destination buffer */ dest->pub.free_in_buffer = dest->bufsize; /* input buffer size */ dest->datasize = 0; /* reset output size */ dest->errcount = 0; /* reset error count */ } /* This function is called by the library if the buffer fills up I just reset destination pointer and buffer size here. Note that this behavior, while preventing seg faults will lead to invalid output streams as data is over- written. */ METHODDEF(boolean) empty_output_buffer (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; dest->pub.free_in_buffer = dest->bufsize; ++dest->errcount; /* need to increase error count */ return TRUE; } /* Usually the library wants to flush output here. I will calculate output buffer size here. Note that results become incorrect, once empty_output_buffer was called. This situation is notified by errcount. */ METHODDEF(void) term_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->datasize = dest->bufsize - dest->pub.free_in_buffer; if (dest->outsize) *dest->outsize += (int)dest->datasize; } /* Override the default destination manager initialization provided by jpeglib. Since we want to use memory-to-memory compression, we need to use our own destination manager. */ GLOBAL(void) jpeg_memory_dest (j_compress_ptr cinfo, JOCTET* buffer, int bufsize, int* outsize) { mem_dest_ptr dest; /* first call for this instance - need to setup */ if (cinfo->dest == 0) { cinfo->dest = (struct jpeg_destination_mgr *) (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT, sizeof (memory_destination_mgr)); } dest = (mem_dest_ptr) cinfo->dest; dest->bufsize = bufsize; dest->buffer = buffer; dest->outsize = outsize; /* set method callbacks */ dest->pub.init_destination = init_destination; dest->pub.empty_output_buffer = empty_output_buffer; dest->pub.term_destination = term_destination; } /* ------------------------------------------------------------- */ /* MEMORY SOURCE INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* Called before data is read */ METHODDEF(void) init_source (j_decompress_ptr dinfo) { /* nothing to do here, really. I mean. I'm not lazy or something, but... we're actually through here. */ } /* Called if the decoder wants some bytes that we cannot provide... 
*/ METHODDEF(boolean) fill_input_buffer (j_decompress_ptr dinfo) { /* we can't do anything about this. This might happen if the provided buffer is either invalid with regards to its content or just a to small bufsize has been given. */ /* fail. */ return FALSE; } /* From IJG docs: "it's not clear that being smart is worth much trouble" So I save myself some trouble by ignoring this bit. */ METHODDEF(void) skip_input_data (j_decompress_ptr dinfo, INT32 num_bytes) { /* There might be more data to skip than available in buffer. This clearly is an error, so screw this mess. */ if ((size_t)num_bytes > dinfo->src->bytes_in_buffer) { dinfo->src->next_input_byte = 0; /* no buffer byte */ dinfo->src->bytes_in_buffer = 0; /* no input left */ } else { dinfo->src->next_input_byte += num_bytes; dinfo->src->bytes_in_buffer -= num_bytes; } } /* Finished with decompression */ METHODDEF(void) term_source (j_decompress_ptr dinfo) { /* Again. Absolute laziness. Nothing to do here. Boring. */ } GLOBAL(void) jpeg_memory_src (j_decompress_ptr dinfo, unsigned char* buffer, size_t size) { struct jpeg_source_mgr* src; /* first call for this instance - need to setup */ if (dinfo->src == 0) { dinfo->src = (struct jpeg_source_mgr *) (*dinfo->mem->alloc_small) ((j_common_ptr) dinfo, JPOOL_PERMANENT, sizeof (struct jpeg_source_mgr)); } src = dinfo->src; src->next_input_byte = buffer; src->bytes_in_buffer = size; src->init_source = init_source; src->fill_input_buffer = fill_input_buffer; src->skip_input_data = skip_input_data; src->term_source = term_source; /* IJG recommend to use their function - as I don't know **** about how to do better, I follow this recommendation */ src->resync_to_restart = jpeg_resync_to_restart; } All I need to do is replace the jpeg_stdio_dest in my program with this code: int numBytes = 0; //size of jpeg after compression char * storage = new char[150000]; //storage buffer JOCTET *jpgbuff = (JOCTET*)storage; //JOCTET pointer to buffer jpeg_memory_dest(&cinfo,jpgbuff,150000,&numBytes); So I need some help to incorporate the above four lines into this function which now works but writes to a file instead of a memory: int write_jpeg_file( char *filename ) { struct jpeg_compress_struct cinfo; struct jpeg_error_mgr jerr; /* this is a pointer to one row of image data */ JSAMPROW row_pointer[1]; FILE *outfile = fopen( filename, "wb" ); if ( !outfile ) { printf("Error opening output jpeg file %s\n!", filename ); return -1; } cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outfile); /* Setting the parameters of the output file here */ cinfo.image_width = width; cinfo.image_height = height; cinfo.input_components = bytes_per_pixel; cinfo.in_color_space = color_space; /* default compression parameters, we shouldn't be worried about these */ jpeg_set_defaults( &cinfo ); /* Now do the compression .. */ jpeg_start_compress( &cinfo, TRUE ); /* like reading a file, this time write one row at a time */ while( cinfo.next_scanline < cinfo.image_height ) { row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } /* similar to read file, clean up after we're done compressing */ jpeg_finish_compress( &cinfo ); jpeg_destroy_compress( &cinfo ); fclose( outfile ); /* success code is 1! */ return 1; } Anybody could help me out a bit with it? I've tried meddling with it but I am not sure how to do it. I I just replace this line: jpeg_stdio_dest(&cinfo, outfile); It's not going to work. 
There is more in that function that needs to change, and I am getting a little lost in all the pointers and memory management.
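    A minimal sketch of the memory-writing variant, assuming the jpeg_memory_dest() defined above is compiled into the same project and that raw_image, width, height, bytes_per_pixel and color_space are the same globals the file-writing version uses; the 150000-byte buffer size and the write_jpeg_memory/out_buffer/out_size names are illustrative, not part of the original code:

        /* Sketch only: same structure as write_jpeg_file(), but the stdio
           destination is replaced by the custom memory destination above. */
        int write_jpeg_memory( unsigned char **out_buffer, int *out_size )
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            JSAMPROW row_pointer[1];
            int numBytes = 0;                              /* filled in by term_destination() */
            JOCTET *jpgbuff = (JOCTET*) malloc( 150000 );  /* destination buffer instead of a FILE* */
            if ( !jpgbuff ) return -1;

            cinfo.err = jpeg_std_error( &jerr );
            jpeg_create_compress( &cinfo );
            jpeg_memory_dest( &cinfo, jpgbuff, 150000, &numBytes );   /* <-- replaces jpeg_stdio_dest() */

            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;
            jpeg_set_defaults( &cinfo );

            jpeg_start_compress( &cinfo, TRUE );
            while ( cinfo.next_scanline < cinfo.image_height )
            {
                row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components ];
                jpeg_write_scanlines( &cinfo, row_pointer, 1 );
            }
            jpeg_finish_compress( &cinfo );   /* term_destination() runs here and sets numBytes */
            jpeg_destroy_compress( &cinfo );

            *out_buffer = (unsigned char*) jpgbuff;   /* caller owns the buffer and must free() it */
            *out_size   = numBytes;
            return 1;
        }

    The only structural difference from write_jpeg_file() is that jpeg_memory_dest() replaces jpeg_stdio_dest() and the FILE* handling disappears; numBytes is set by term_destination() when jpeg_finish_compress() runs, so only read it after that call.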

    Read the article

  • CUDA linking error - Visual Express 2008 - nvcc fatal due to (null) configuration file

    - by Josh
    Hi, I've been searching extensively for a possible solution to my error for the past 2 weeks. I have successfully installed the Cuda 64-bit compiler (tools) and SDK as well as the 64-bit version of Visual Studio Express 2008 and Windows 7 SDK with Framework 3.5. I'm using windows XP 64-bit. I have confirmed that VSE is able to compile in 64-bit as I have all of the 64-bit options available to me using the steps on the following website: (since Visual Express does not inherently include the 64-bit packages) http://jenshuebel.wordpress.com/2009/02/12/visual-c-2008-express-edition-and-64-bit-targets/ I have confirmed the 64-bit compile ability since the "x64" is available from the pull-down menu under "Tools-Options-VC++ Directories" and compiling in 64-bit does not result in the entire project being "skipped". I have included all the needed directories for 64-bit cuda tools, 64 SDK and Visual Express (\VC\bin\amd64). Here's the error message I receive when trying to compile in 64-bit: 1>------ Build started: Project: New, Configuration: Release x64 ------ 1>Compiling with CUDA Build Rule... 1>"C:\CUDA\bin64\nvcc.exe" -arch sm_10 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -maxrregcount=32 --compile -o "x64\Release\template.cu.obj" "c:\Documents and Settings\All Users\Application Data\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\src\CUDA_Walkthrough_DeviceKernels\template.cu" 1>nvcc fatal : Visual Studio configuration file '(null)' could not be found for installation at 'C:/Program Files (x86)/Microsoft Visual Studio 9.0/VC/bin/../..' 1>Linking... 1>LINK : fatal error LNK1181: cannot open input file '.\x64\Release\template.cu.obj' 1>Build log was saved at "file://c:\Documents and Settings\Administrator\My Documents\Visual Studio 2008\Projects\New\New\x64\Release\BuildLog.htm" 1>New - 1 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Here's the simple code I'm trying to compile/run in 64-bit: #include <stdlib.h> #include <stdio.h> #include <string.h> #include <math.h> #include <cuda.h> void mypause () { printf ( "Press [Enter] to continue . . ." 
); fflush ( stdout ); getchar(); } __global__ void VecAdd1_Kernel(float* A, float* B, float* C, int N) { int i = blockDim.x*blockIdx.x+threadIdx.x; if (i<N) C[i] = A[i] + B[i]; //result should be a 16x1 array of 250s } __global__ void VecAdd2_Kernel(float* B, float* C, int N) { int i = blockDim.x*blockIdx.x+threadIdx.x; if (i<N) C[i] = C[i] + B[i]; //result should be a 16x1 array of 400s } int main() { int N = 16; float A[16];float B[16]; size_t size = N*sizeof(float); for(int i=0; i<N; i++) { A[i] = 100.0; B[i] = 150.0; } // Allocate input vectors h_A and h_B in host memory float* h_A = (float*)malloc(size); float* h_B = (float*)malloc(size); float* h_C = (float*)malloc(size); //Initialize Input Vectors memset(h_A,0,size);memset(h_B,0,size); h_A = A;h_B = B; printf("SUM = %f\n",A[1]+B[1]); //simple check for initialization //Allocate vectors in device memory float* d_A; cudaMalloc((void**)&d_A,size); float* d_B; cudaMalloc((void**)&d_B,size); float* d_C; cudaMalloc((void**)&d_C,size); //Copy vectors from host memory to device memory cudaMemcpy(d_A,h_A,size,cudaMemcpyHostToDevice); cudaMemcpy(d_B,h_B,size,cudaMemcpyHostToDevice); //Invoke kernel int threadsPerBlock = 256; int blocksPerGrid = (N+threadsPerBlock-1)/threadsPerBlock; VecAdd1(blocksPerGrid, threadsPerBlock,d_A,d_B,d_C,N); VecAdd2(blocksPerGrid, threadsPerBlock,d_B,d_C,N); //Copy results from device memory to host memory //h_C contains the result in host memory cudaMemcpy(h_C,d_C,size,cudaMemcpyDeviceToHost); for(int i=0; i<N; i++) //output result from the kernel "VecAdd" { printf("%f ", h_C[i] ); printf("\n"); } printf("\n"); cudaFree(d_A); cudaFree(d_B); cudaFree(d_C); free(h_A); free(h_B); free(h_C); mypause(); return 0; }
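    Two hedged observations rather than a confirmed fix. First, the nvcc message "Visual Studio configuration file '(null)' could not be found" is commonly reported when Visual C++ Express lacks the 64-bit vcvars batch file (vcvars64.bat / vcvarsamd64.bat) under VC\bin\amd64, since the Express edition does not ship one; creating a vcvars64.bat there that simply calls the Windows SDK's SetEnv.cmd /x64 is the workaround usually cited. Second, even once nvcc runs, the kernel launches in the listing above are written as plain function calls and would not compile; CUDA kernels are launched with the execution-configuration syntax, under the names they were declared with:

        // Corrected launch syntax for the kernels declared above (same grid/block values):
        VecAdd1_Kernel<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);
        VecAdd2_Kernel<<<blocksPerGrid, threadsPerBlock>>>(d_B, d_C, N);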

    Read the article

  • Dell PowerEdge R720xd stuck in BIOS

    - by G_P
    I have a Dell PowerEdge R720xd that gets stuck in the BIOS when booting. It successfully gets past the "configuring memory" and "configuring iDRAC" screens, but once it shows "CPLD version : 103" with the various management engine versions/patches, it just hangs. No error messages are displayed. This started happening when we tried adding additional RAM to the machine. Since then, we tried re-seating the new memory, which resulted in the same issue. Then we took out all the new memory, and the problem persists. We have also tried pressing F2 to get into System Setup, but it just indicates "Entering System Setup" and hangs at the same point. Has anybody seen this issue before, or does anybody have ideas on what to try next? UPDATE: After troubleshooting and trying to isolate the issue (stripping things down to a single CPU and single DIMM, same problem; swapping to the other CPU and a different DIMM, same problem), Dell support will be coming out to swap the system board.

    Read the article

  • Group Policy drive maps fail with Error Code: 0x80070043

    - by Topherhead
    I'm running a Server 2008 R2 domain with all Windows 7 x64 client machines. All drives are mapped using Group Policy; the shares were previously hosted on a NAS. We just built a new, huge, fast server, so I'm in the process of migrating all the network drives from the NAS to the new file server (fs). The old drive maps were created with Group Policy, so I just went in, updated them to point at the new server, and selected the "Replace" option. But the drives just plain do not map. When I run RSOP on my machine, the error for the drive map is: Result: Failure (Error Code: 0x80070043). The other odd thing, though it may or may not have anything to do with it, is that the winning GPO is shown with its SID instead of its name. The SID is correct, though. Accessing the shares through Explorer works fine, and mapping them manually works fine. Any ideas? Thanks Chris
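    For what it's worth, 0x80070043 is the Win32 error ERROR_BAD_NET_NAME ("The network name cannot be found"), so the client is applying the policy but failing to resolve the UNC path it contains. A quick diagnostic sketch from an affected client; the server and share names below are placeholders, not taken from the question:

        rem Does the path in the preference item resolve exactly as written there?
        ping fs
        net view \\fs
        net use X: \\fs\share /persistent:no
        rem Then re-check which drive-map preference item actually won:
        gpresult /h gpreport.html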

    Read the article

  • Testdisk won’t list files for an ext4 partition inside a LVM inside a LUKS partition

    - by user1598585
    I have accidentally deleted a file that I want to recover. The partition is an ext4 partition inside an LVM volume that is encrypted with dm-crypt/LUKS. The encrypted LUKS partition is /dev/sda2, which contains a physical volume with a single volume group, mapped to /dev/mapper/system, and the logical volume holding the ext4 filesystem is mapped to /dev/mapper/system-home. Running # testdisk /dev/mapper/system-home recognizes it as an ext4 partition, but testdisk tells me the partition seems damaged when I try to list the files. If I run # testdisk /dev/mapper/system it detects all the partitions, but the same thing happens when I try to list their files. Am I doing something wrong, or is this a known bug? I have searched but haven’t found any clue.
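    One alternative worth trying, sketched under the assumption that the filesystem can be taken offline: skip testdisk's file listing and point an ext-specific undelete tool directly at the opened logical volume. The paths and the example filename below are illustrative.

        # stop writing to the filesystem as soon as possible, then work on it unmounted
        umount /dev/mapper/system-home
        # recover one known path, given relative to the filesystem root
        extundelete /dev/mapper/system-home --restore-file user/Documents/lost-file.txt
        # or recover everything deleted in roughly the last hour
        extundelete /dev/mapper/system-home --after $(date -d "1 hour ago" +%s) --restore-all
        # recovered files land in ./RECOVERED_FILES/ in the current directory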

    Read the article

  • Remote Desktop closes with Fatal Error (Error Code: 5)

    - by Swinders
    We have one PC (Windows XP SP3) that we cannot log onto using a Remote Desktop session. Logging on to the PC directly (sitting in front of it, using the connected keyboard and monitor) works fine. From a second PC (I've tried a number of different ones, all Windows XP SP3) I run 'mstsc' and type in the PC name to connect to. This shows the login box, where we can enter the correct login details and click OK. Within a few seconds we get an error: Title: Fatal Error (Error Code: 5) Error: Your Remote Desktop session is about to end. This computer might be low on virtual memory. Close your other programs, and then try connecting to the remote computer again. If the problem continues, contact your network administrator or technical support. None of the computers we are using are low on memory (2 GB+), and we let Windows manage the virtual memory side of things. We do not see this with any other PC, and we do use Remote Desktop in meeting rooms to connect to user PCs with no problems.
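    Since the dialog itself blames virtual memory, one cheap data point is to compare what Windows actually reports for the page file and free virtual memory on both the client and the problem PC before chasing anything else; a sketch using the stock wmic tool (available on XP Professional):

        rem page file location and sizing as Windows sees it
        wmic pagefile list /format:list
        rem free vs. total virtual memory on this machine (values in KB)
        wmic OS get FreePhysicalMemory,FreeVirtualMemory,TotalVirtualMemorySize /format:list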

    Read the article

  • Can you disable UNC paths in Windows?

    - by Evan
    We are trying to lock down a Terminal Server and want to remove a commercial package's ability to accept UNC file paths, i.e., so that paths in the app can only be entered using Windows drive letters. Is there any way to do this in Windows? Can we disallow UNC paths for just the app? Can we disallow UNC paths for the entire Terminal Server session? The intention is to allow the application to write only to certain directories (as mapped in the Terminal Server session), and to prevent it from writing files to directories that the users have access to but that are not mapped in the session.

    Read the article

  • Scheduled task to map a network drive runs, but doesn't map the drive

    - by bikefixxer
    I have a task set up to run whenever someone logs onto the computer; it deletes all existing network drive mappings and maps a network drive. Here is what is in the batch file: @echo off net use * /delete /y net use b: \\Server\Share /user:DOMAIN\Username password exit When the computer is restarted or logged off and back on, the task runs (according to the Scheduled Tasks window showing when it last ran), but the mapped drive doesn't show up. If I open a command prompt and type "net use", it simply says "There are no entries in the list". If I then right-click the task and run it manually, it works and the mapped drive shows up. I've checked the log and nothing shows up. I've tried adding a delay in the batch file so it waits 10 seconds (ping 1.1.1.1 -n 1 -w 10000 >nul), thinking that maybe the network wasn't connected yet, but that didn't work. What else can I try? Thanks!
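    Two things worth checking, plus a sketch. First, drive mappings are per logon session: if the task runs under a different account, or runs elevated ("run with highest privileges") while you look for the drive from a non-elevated Explorer or command prompt, the mapping exists but lives in the other session, which matches "task ran fine but net use shows nothing". Second, if it really is a timing issue, a retry loop that waits for the server to answer is more robust than a fixed ping delay. A sketch of the batch file with such a loop (the 30 × 2-second cap is arbitrary):

        @echo off
        set tries=0
        :wait
        if %tries% geq 30 goto giveup
        net view \\Server >nul 2>&1 && goto map
        set /a tries+=1
        timeout /t 2 /nobreak >nul
        goto wait
        :map
        net use * /delete /y
        net use b: \\Server\Share /user:DOMAIN\Username password
        exit /b 0
        :giveup
        exit /b 1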

    Read the article

  • OOM-Killer, jboss and kernel panic

    - by user32425
    Hi, we are running RedHat 3.4.6 (32-bit) on VMware ESX 3.5 (x64) with 6 GB of RAM. A few Java processes (including JBoss) are running in the background. The problem is that the Java processes consume lots of memory, and sometimes they are killed by the OOM killer. When the OOM killer is about to act, free physical memory is very low (100–200 MB), but swap is not used (99% free). Sometimes this causes a kernel panic too. So why isn't the swap used? How can I investigate this kernel panic? Is using 6 GB of memory on 32-bit Red Hat wise? Thanks
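    A hedged pointer rather than a definitive answer: on a 32-bit kernel the OOM killer can fire with swap untouched when the low-memory zone (roughly the first ~900 MB, which the kernel itself must allocate from) is exhausted, even though plenty of HighMem and swap remain; that failure mode fits 6 GB of RAM on x86 far better than a genuine shortage. A quick way to see whether that is happening, assuming the kernel exposes the per-zone counters:

        # low vs. high memory, sampled while the Java heaps are loaded
        grep -i -e lowtotal -e lowfree -e hightotal -e highfree /proc/meminfo
        # the OOM killer logs its reasoning to the kernel log when it fires
        dmesg | grep -i -B 5 -A 20 "out of memory"

    If LowFree collapses while HighFree stays large, the usual fixes are a 64-bit guest or a hugemem/PAE kernel with a larger low-memory split rather than more swap.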

    Read the article

  • error: cannot fork() for status: Resource temporarily unavailable (git)

    - by Elnaz Shahmehr
    Whenever I try to do anything in git (add, remove, pull, push to GitHub), I just get this error in my terminal. Thanks in advance! selnaz:iOS-Tidinfo Lnaz$ git add . error: cannot fork() for status: Resource temporarily unavailable fatal: Could not run git status --porcelain fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed fatal: git status --porcelain failed Edit: selnaz:iOS-Tidinfo Lnaz$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 256 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 709 virtual memory (kbytes, -v) unlimited Edit 2: selnaz:iOS-Tidinfo Lnaz$ ps xfu | wc -l ps: illegal option -- f usage: ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]] [-u] [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]] ps [-L] 0
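    The shell prompt and the ps error message suggest this is OS X, and the ulimit -a output points at the per-user process cap: 709 max user processes is easily reached with a few heavy apps open, at which point fork() returns "Resource temporarily unavailable". A sketch of the usual remedies; the limit values are illustrative and the soft limit can only be raised up to the hard limit:

        # count how many processes this user already has
        ps -U $(whoami) | wc -l
        # raise the soft limit for this shell only, then retry
        ulimit -u 1024
        git status --porcelain
        # inspect / raise the launchd ceiling system-wide (soft value, then hard value)
        launchctl limit maxproc
        sudo launchctl limit maxproc 1024 2048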

    Read the article
