Search Results

Search found 269 results on 11 pages for 'volatile'.

Page 6/11 | < Previous Page | 2 3 4 5 6 7 8 9 10 11  | Next Page >

  • ubuntu server slowly filling up

    - by Crash893
    We had our Samba server (Ubuntu 8.04 LTS) share fill up the other day, but when I went to look at it I can't see that any of the shares have too much on them. We have 5 group shares, and each user has an individual share: one user has 22 GB of stuff, a few others have 10-20 MB, and everyone else is empty, so maybe 26 GB total. I deleted a few files yesterday and freed up about 250 MB of space; today when I checked, it was completely full again, so I deleted some older files and freed up about 170 MB, but I can watch the free space slowly creep down. I keep running df -h:

        Filesystem   1K-blocks   Used       Available  Use%  Mounted on
        /dev/sda1    241690180   229340500  169200     100%  /
        varrun       257632      260        257372     1%    /var/run
        varlock      257632      0          257632     0%    /var/lock
        udev         257632      72         257560     1%    /dev
        devshm       257632      52         257580     1%    /dev/shm
        lrm          257632      40000      217632     16%   /lib/modules/2.6.24-28-generic/volatile

    What can I do to try to hunt down what's taking up so much of my HDD? (I'm fairly new to Unix in general, so I apologize if this is not well explained.)

    Read the article

  • NVRAM for journals on Linux?

    - by symcbean
    I've been thinking about ways of speeding up disk I/O, and one of the bottlenecks I keep coming back to is the journal. There's an obvious benefit to using an SSD for the journal - over and above just write caching - unless of course I simply disable the journal and rely on the write cache (after all, devicemapper doesn't seem to support barriers). In order to get the benefit of a battery-backed write cache on the controller, I'd need to disable journalling - but then the OS would have to fsck the filesystem after an outage. Of course, if the OS knew what was in the battery-backed memory, it could use it as the journal - but that means it must be exposed as a block device and be under the sole control of the operating system. However, I've not been able to find a suitable low-cost device (no, wear-levelled Flash is not adequate for a journal, at least not one which uses SmartMedia). While there's no end of flash devices and disk/array controllers with battery-backed write caches, so far I've not found anything which just gives me non-volatile memory addressable as a block storage device.

    Read the article

  • Solaris mounting partitions

    - by Benco
    I'm trying to mount a partition in Solaris 10...

        bash-3.00# mount /dev/dsk/c0t0d0s3 /data
        mount: /dev/dsk/c0t0d0s3 is already mounted or /data is busy

    As far as I know, c0t0d0s3 isn't already mounted elsewhere, so what's really going on here? From /etc/mnttab:

        /dev/dsk/c1t0d0s0  /                   ufs      rw,intr,largefiles,logging,xattr,onerror=panic,dev=780000  1285811136
        /devices           /devices            devfs    dev=4840000                                   1285811125
        ctfs               /system/contract    ctfs     dev=48c0001                                   1285811125
        proc               /proc               proc     dev=4880000                                   1285811125
        mnttab             /etc/mnttab         mntfs    dev=4900001                                   1285811125
        swap               /etc/svc/volatile   tmpfs    xattr,dev=4940001                             1285811125
        objfs              /system/object      objfs    dev=4980001                                   1285811125
        sharefs            /etc/dfs/sharetab   sharefs  dev=49c0001                                   1285811125
        /usr/lib/libc/libc_hwcap1.so.1  /lib/libc.so.1  lofs  dev=780000                              1285811131
        fd                 /dev/fd             fd       rw,dev=4b40001                                1285811136
        swap               /tmp                tmpfs    xattr,dev=4940002                             1285811137
        swap               /var/run            tmpfs    xattr,dev=4940003                             1285811137
        -hosts             /net                autofs   nosuid,indirect,ignore,nobrowse,dev=4c00001   1285811148
        auto_home          /home               autofs   indirect,ignore,nobrowse,dev=4c00002          1285811148
        cordb:vold(pid530) /vol                nfs      ignore,noquota,dev=4bc0001                    1285811149

    I suspect the problem is not related to the mount point, but rather the disk slice I'm trying to mount:

        bash-3.00# newfs -v /dev/dsk/c0t0d0s3
        /dev/rdsk/c0t0d0s3: Device busy

    Read the article

  • Eliminating Windows 7 user tracking registry writes

    - by caffiend
    Windows 7 continues the practice of saving user actions in the registry. I'd like to disable this practice, both to avoid registry-file fragmentation and SSD wear, and because I'm uncomfortable with programs being able to quickly analyze my usage habits. Even with the "Turn off user tracking" policy enabled, there are at least two areas that still contain user tracking:

    HKCU\Software\Classes\Local Settings\MuiCache
    This key stores a cache of most recently accessed strings, including descriptions of most recently run executables.

    HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU
    This key stores the most recently viewed folders along with timestamps.

    Are there additional policy settings/registry entries to disable these writes? If not, is it possible to make these entries volatile? Would it be practical to create a temporary hive (e.g., on a ramdisk) and map it over this location?

    Read the article

  • Is putting swapfile & temp folder in ramdisk a good idea in Windows 7 64 bit with lots of ram?

    - by Tony_Henrich
    I want my Windows to run as fast as possible. If I have 12 GB of RAM in Windows 7 64-bit with a quad-core CPU, and all my apps fit in memory, will the swap file ever be used for anything? The question is whether it's a good idea to put the swap file on a RAM disk. Would a RAM disk help in any way, or will Windows intelligently use all the available memory for all its work? I am also thinking of putting the temp folder on a RAM disk. I know a RAM disk is volatile memory, and I don't care about the content in it if it gets lost.

    Read the article

  • Venezuela's Highly Inflationary Economy Means Changes to Financial Statements

    - by Theresa Hickman
    This is a bit of an esoteric topic, but given the number of U.S. companies (particularly oil companies) that operate and have subsidiaries in Venezuela, I think it is worthy of an honorable mention. As you may or may not know, Venezuela's currency has had some changes over the years. In 2008, the Venezuelan Bolivar became the Bolivar Fuerte, which dropped three zeros: Bs.10,000 became Bs.F.10, and all their bills and coins were changed to reflect this. Then on Jan. 8, 2010, the government devalued the currency by 100%: the official exchange rate went from 2.15 to 4.30 bolivars fuertes per U.S. dollar. (I always wanted to visit Venezuela; I guess it's time to book my vacation.) The SEC recently labeled Venezuela a highly inflationary economy. This means that U.S. companies with investments/subsidiaries in Venezuela will need to apply highly inflationary accounting rules starting on Jan. 1, 2010. In addition, companies need to make more detailed disclosures when the Venezuelan reported balances differ from the actual US-dollar-denominated balances. In a nutshell, if you formerly used translation, then starting Jan. 1 of this year you must now use remeasurement (or the temporal method) to restate your Venezuelan entity's financial statements. See ASC Topic 830, Foreign Currency Matters, which states that "[t]he financial statements of a foreign entity in a highly inflationary economy shall be remeasured as if the functional currency were the reporting currency." For you non-accountants that I haven't bored and are still reading at this point, the reason the SEC is doing this is to ensure financial statements are presented as accurately as possible. Hyperinflationary economies have volatile currencies, such as Venezuela's (it's not every day a currency devalues 100% overnight), which can distort financial statements if the local currency (the Venezuelan Bolivar Fuerte) is used as the functional currency. To make financial statements more accurate, the reporting currency of the U.S. parent (U.S. dollars) should be used as the functional currency. FASB.org actually has a nice write-up on this.

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
    My AT91SAM7X512's SPI peripheral gets disabled on the Xth write (X varies) to SPI_TDR. As a result, the processor hangs in the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write(), which belongs to the software package/library provided by Atmel. The problem occurs arbitrarily - sometimes everything works OK, and sometimes it fails on repeated attempts (attempt = downloading the same binary to the MCU and running the program). Configurations are (defined in the order of writing):

        SPI_MR:     MSTR = 1, PS = 0, PCSDEC = 0, PCS = 0111, DLYBCS = 0
        SPI_CSR[3]: CPOL = 0, NCPHA = 1, CSAAT = 0, BITS = 0000, SCBR = 20, DLYBS = 0, DLYBCT = 0
        SPI_CR:     SPIEN = 1

    After setting the configurations, the code verifies that the SPI is enabled by checking the SPIENS flag. I perform a transmission of bytes as follows:

        const short int dataSize = 5;
        // Filling array with random data
        unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF};
        short int i = 0;
        volatile unsigned short dummyRead;

        SetCS3(); // NPCS3 == PIOA15
        while(i-- < dataSize)
        {
            mySPI_Write(data[i]);
            while((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
            dummyRead = SPI_Read(); // SPI_Read() from Atmel's library
        }
        ClearCS3();

        /**********************************/

        void mySPI_Write(unsigned char data)
        {
            while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
            AT91C_BASE_SPI0->SPI_TDR = data;
            while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where
            // the processor hangs, because the SPI peripheral is disabled
            // (SPIENS equals 0), which makes TDRE equal to 0 forever.
        }

    Questions:

    1. What's causing the SPI peripheral to become disabled on the write to SPI_TDR?
    2. Should I un-comment the line in SPI_Write() that reads the SPI_RDR register - i.e., the 4th line in the following code? (The 4th line is originally marked as a comment.)

        void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data)
        {
            // Discard contents of RDR register
            //volatile unsigned int discard = spi->SPI_RDR;

            /* Send data */
            while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
            spi->SPI_TDR = data | SPI_PCS(npcs);
            while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0);
        }

    3. Is there something wrong with the code above that transmits 5 bytes of data?

    Please note:

    - The NPCS line num. 3 is a GPIO line (i.e., in PIO mode) and is not controlled by the SPI controller. I'm controlling this line myself in code, by de/asserting the ChipSelect#3 (NPCS3) pin when needed. The reason I'm doing so is that problems occurred while trying to let the SPI controller control this pin.
    - I didn't reset the SPI peripheral twice, because the errata says to write SWRST twice only if a software reset is performed - which I don't do. Quoting the errata: "If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not work properly (the clock is enabled before the chip select.) Problem Fix/Workaround: The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be correctly set."
    - I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in SPI_Write()), the code works perfectly and the communication succeeds.

    Useful links: AT91SAM7X Series Preliminary.pdf, ATMEL software package/library, spi.c from Atmel's library, spi.h from Atmel's library

    An example of initializing the SPI and performing a transfer of 5 bytes would be highly appreciated and helpful.

    Read the article

  • Planning for the Recovery

    - by john.orourke(at)oracle.com
    As we plan for 2011, there are many positive signs in the global economy, but also some lingering issues. Planning is no longer about extrapolating past performance and adjusting for growth. It is now about constantly testing the temperature of the water, formulating scenarios, assessing risk and assigning probabilities. So how does one plan for recovery and improve forecast accuracy in such a volatile environment? Here are some suggestions from a recent article I wrote, which was published in the December Financial Planning & Analysis (FP&A) newsletter from the AFP (Association for Financial Professionals):

    - Increase the frequency of forecasting
    - Get more line managers involved in the planning and forecasting process
    - Re-consider what's being measured - i.e. key financial and operational metrics
    - Incorporate risk and probability into forecasts
    - Reduce reliance on spreadsheets - leverage packaged EPM applications

    To learn more about these best practices, check out the FP&A section of the AFP website and register to receive the FP&A newsletter. AFP recently launched a new topic area focused on the FP&A function and items of interest to this group of finance professionals. In addition to the FP&A quarterly newsletter, AFP will be publishing articles, running webinars, and will have an FP&A track in its annual conference, which is in Boston next November. Brian Kalish, AFP's Finance Lead, is hoping this initiative creates a valuable networking and information-sharing resource for FP&A professionals. Here's a link to the FP&A page on the AFP web site: http://www.afponline.org/pub/res/topics/topics_fpa.html If you register on the site you can access and subscribe to the FP&A newsletter and other resources. Best of luck in your planning for 2011 and beyond!

    Read the article

  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article, I've come up with a table schema that would separate highly volatile data from tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure if this is a good idea, since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea:

    I have four types of users modeled in a Class Table Inheritance structure. In the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote tip #16 from the Nettuts+ article mentioned above:

        Example 2: You have a "last_login" field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum.

    Now it gets even trickier. I need to keep track of some user statistics, like:

    - how many unique times a user profile was seen
    - how many unique times an ad from a specific type of user was clicked
    - how many unique times a post from a specific type of user was seen

    and so on... In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

        | ID | TABLE    | EVENT     | DATE         | IP        |
        |----|----------|-----------|--------------|-----------|
        | 1  | user     | login     | 201004190030 | 127.0.0.1 |
        | 1  | user     | login     | 201004190230 | 127.0.0.1 |
        | 2  | user     | created   | 201004190031 | 127.0.0.2 |
        | 2  | user     | activated | 201004190234 | 127.0.0.2 |
        | 2  | user     | approved  | 201004190930 | 217.0.0.1 |
        | 2  | user     | login     | 201004191200 | 127.0.0.2 |
        | 15 | user_ads | created   | 201004191230 | 127.0.0.1 |
        | 15 | user_ads | impressed | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 2  | user     | blocked   | 201004200319 | 217.0.0.1 |
        | 2  | user     | deleted   | 201004200320 | 217.0.0.1 |

    Basically, the ID refers to the primary key (id) field in the TABLE table; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data. Due to the nature of the events table I also thought of making some optimizations, such as:

    - #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENT columns could be set up as ENUMs instead of VARCHARs to save some space.
    - #14: Store IPs as UNSIGNED INT with INET_ATON() instead of VARCHARs.
    - Store DATEs as TIMESTAMPs instead of DATETIMEs.
    - Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic I guess.

    Pros:

    - Ability to store more detailed data (such as logins).
    - No need to design (and code for) almost a dozen additional tables (dates and statistics).
    - Reduces a few columns per table and keeps volatile data separated.

    Cons:

    - Non-relational (still not as bad as EAV): SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC;
    - 6 bytes of overhead per event (ID, TABLE and EVENT).

    I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant... Am I missing something? What are your thoughts on this? Thanks!

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse - it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse!

    Furthermore, the code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to reign that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home grown litmus tests:

    - Single Responsibility Principle (SRP) - The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) - This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) - This class is inherently closed to change.
    - Small and Stable Problem Domain - This method only cares about System.Threading.Thread.
    - All-or-None Usage - A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate - since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }
                public string Name { get; set; }
                public Address HomeAddress { get; set; }
                public int Age { get; set; }
                public DateTime LastUsed { get; set; }
                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) - The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) - This method depends on the data model beneath itself, which in turn is largely dependent on the business definition of an account, which can be inherently very unstable.
    - Open-Closed Principle (OCP) - This class is not really closed for modification. Every time the account definition may change, you'd need to modify this class.
    - Small and Stable Problem Domain - The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage - What if your view of the account encompasses data from 3 different sources, but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate - DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, oftentimes you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult and many times ends up being a futile exercise and, worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes - these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks - home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and Uniformity Frameworks - To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex items in a certain way. Or, perhaps, to simplify and dumb-down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and Business Layers - tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. May be reused across applications but are also very volatile.
    - Entities and Data Access Layers - these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Architecture : am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture using .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy:

    - Inconsistent naming
    - Unneeded views (views referencing other views, SELECT * views, etc...)
    - Aggregated columns
    - Potatoes and carrots in the same table
    - etc...

    So I ended up fully isolating my database structure from my domain model. To do so, EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges, and I'm starting to ask myself if I'm doing things right.

    My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising, and the classes it contains are starting to accumulate a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager loading relationship properties):

        DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends")
        // To...
        IFooContext.Bars.Include(...).Include(...).Where(...)

    Some frameworks trample the isolation layers (DevExpress grids need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself:

    - Is the isolation of EF auto-generated entities an unneeded cost? Should I allow frameworks to hit IQueryable? Slow slope to hell? (It's really hard to isolate the DevExpress framework - any successful experience?)
    - Is the high volatility of my domain model normal? Did you have similar difficulties? Any advice based on experience?

    Read the article

  • How to unit test synchronized code

    - by gillJ
    Hi, I am new to Java and JUnit. I have the following piece of code that I want to test, and I would appreciate your ideas about the best way to go about testing it. Basically, the following code is about electing a leader from a cluster. The leader holds a lock on the shared cache, and the leader's services get resumed and disposed if it somehow loses the lock on the cache. How can I make sure that a leader/thread still holds the lock on the cache, and that another thread cannot get its services resumed while the first is in execution?

        public interface ContinuousService {
            public void resume();
            public void pause();
        }

        public abstract class ClusterServiceManager {
            private volatile boolean leader = false;
            private volatile boolean electable = true;
            private List<ContinuousService> services;

            protected synchronized void onElected() {
                if (!leader) {
                    for (ContinuousService service : services) {
                        service.resume();
                    }
                    leader = true;
                }
            }

            protected synchronized void onDeposed() {
                if (leader) {
                    for (ContinuousService service : services) {
                        service.pause();
                    }
                    leader = false;
                }
            }

            public void setServices(List<ContinuousService> services) {
                this.services = services;
            }

            @ManagedAttribute
            public boolean isElectable() {
                return electable;
            }

            @ManagedAttribute
            public boolean isLeader() {
                return leader;
            }
        }

        public class TangosolLeaderElector extends ClusterServiceManager implements Runnable {
            private static final Logger log = LoggerFactory.getLogger(TangosolLeaderElector.class);
            private String election;
            private long electionWaitTime = 5000L;
            private NamedCache cache;

            public void start() {
                log.info("Starting LeaderElector ({})", election);
                Thread t = new Thread(this, "LeaderElector (" + election + ")");
                t.setDaemon(true);
                t.start();
            }

            public void run() {
                // Give the connection a chance to start itself up
                try { Thread.sleep(1000); } catch (InterruptedException e) {}
                boolean wasElectable = !isElectable();
                while (true) {
                    if (isElectable()) {
                        if (!wasElectable) {
                            log.info("Leadership requested on election: {}", election);
                            wasElectable = isElectable();
                        }
                        boolean elected = false;
                        try {
                            // Try and get the lock on the LeaderElectorCache for the current election
                            if (!cache.lock(election, electionWaitTime)) {
                                // We didn't get the lock. Cycle round again.
                                // This code ensures we check the electable flag every now & then
                                continue;
                            }
                            elected = true;
                            log.info("Leadership taken on election: {}", election);
                            onElected();
                            // Wait here until the services fail in some way.
                            while (true) {
                                try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                                if (!cache.lock(election, 0)) {
                                    log.warn("Cache lock no longer held for election: {}", election);
                                    break;
                                } else if (!isElectable()) {
                                    log.warn("Node is no longer electable for election: {}", election);
                                    break;
                                }
                                // We're fine - loop round and go back to sleep.
                            }
                        } catch (Exception e) {
                            if (log.isErrorEnabled()) {
                                log.error("Leadership election " + election + " failed (try bfmq logs for details)", e);
                            }
                        } finally {
                            if (elected) {
                                cache.unlock(election);
                                log.info("Leadership resigned on election: {}", election);
                                onDeposed();
                            }
                            // On deposition, do not try and get re-elected for at least the standard wait time.
                            try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                        }
                    } else {
                        // Not electable - wait a bit and check again.
                        if (wasElectable) {
                            log.info("Leadership NOT requested on election ({}) - node not electable", election);
                            wasElectable = isElectable();
                        }
                        try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                    }
                }
            }

            public void setElection(String election) { this.election = election; }

            @ManagedAttribute
            public String getElection() { return election; }

            public void setNamedCache(NamedCache nc) { this.cache = nc; }
        }

    Read the article

  • Problems with createImage(int width, int height)

    - by Jonathan
    I have the following code, which is run every 10 ms as part of a game:

        private void gameRender()
        {
            if (dbImage == null)
            {
                // createImage() returns null if GraphicsEnvironment.isHeadless()
                // returns true. (java.awt.GraphicsEnvironment)
                dbImage = createImage(PWIDTH, PHEIGHT);
                if (dbImage == null)
                {
                    System.out.println("dbImage is null"); // Error received
                    return;
                }
                else
                    dbg = dbImage.getGraphics();
            }

            // clear the background
            dbg.setColor(Color.white);
            dbg.fillRect(0, 0, PWIDTH, PHEIGHT);

            // draw game elements...

            if (gameOver)
            {
                gameOverMessage(dbg);
            }
        }

    The problem is that it enters the if statement which checks for the Image being null, even after I attempt to define the image. I looked around, and it seems that createImage() will return null if GraphicsEnvironment.isHeadless() returns true. I don't understand exactly what the isHeadless() method's purpose is, but I thought it might have something to do with the compiler or IDE, so I tried it on two (Eclipse and BlueJ), both of which get the same error. Anyone have any idea what the source of the error is, and how I might fix it? Thanks in advance, Jonathan

    EDIT: I am using java.awt.Component.createImage(int width, int height). The purpose of this method is to ensure the creation of, and to edit, an Image that will contain the player's view of the game, which will later be drawn to the screen by means of a JPanel. Here is some more code if this helps at all:

        public class Sim2D extends JPanel implements Runnable
        {
            private static final int PWIDTH = 500;
            private static final int PHEIGHT = 400;

            private volatile boolean running = true;
            private volatile boolean gameOver = false;
            private Thread animator;

            // gameRender()
            private Graphics dbg;
            private Image dbImage = null;

            public Sim2D()
            {
                setBackground(Color.white);
                setPreferredSize(new Dimension(PWIDTH, PHEIGHT));

                setFocusable(true);
                requestFocus(); // Sim2D now receives key events
                readyForTermination();

                addMouseListener(new MouseAdapter()
                {
                    public void mousePressed(MouseEvent e)
                    {
                        testPress(e.getX(), e.getY());
                    }
                });
            } // end of constructor

            private void testPress(int x, int y)
            {
                if (!gameOver)
                {
                    gameOver = true; // end game at mousepress
                }
            } // end of testPress()

            private void readyForTermination()
            {
                addKeyListener(new KeyAdapter()
                {
                    public void keyPressed(KeyEvent e)
                    {
                        int keyCode = e.getKeyCode();
                        if ((keyCode == KeyEvent.VK_ESCAPE) ||
                            (keyCode == KeyEvent.VK_Q) ||
                            (keyCode == KeyEvent.VK_END) ||
                            ((keyCode == KeyEvent.VK_C) && e.isControlDown()))
                        {
                            running = false; // end process on above list of keypresses
                        }
                    }
                });
            } // end of readyForTermination()

            public void addNotify()
            {
                super.addNotify(); // creates the peer
                startGame();       // start the thread
            } // end of addNotify()

            public void startGame()
            {
                if (animator == null || !running)
                {
                    animator = new Thread(this);
                    animator.start();
                }
            } // end of startGame()

            // run method for world
            public void run()
            {
                while (running)
                {
                    long beforeTime, timeDiff, sleepTime;
                    beforeTime = System.nanoTime();

                    gameUpdate();  // updates objects in game (step event in game)
                    gameRender();  // renders image
                    paintScreen(); // paints rendered image to screen

                    timeDiff = (System.nanoTime() - beforeTime) / 1000000;
                    sleepTime = 10 - timeDiff;
                    if (sleepTime <= 0) // if took longer than 10ms
                    {
                        sleepTime = 5; // sleep a bit anyways
                    }

                    try
                    {
                        Thread.sleep(sleepTime); // sleep by allotted time (attempts to keep this loop to about 10ms)
                    }
                    catch (InterruptedException ex) {}

                    beforeTime = System.nanoTime();
                }
                System.exit(0);
            } // end of run()

            private void gameRender()
            {
                if (dbImage == null)
                {
                    dbImage = createImage(PWIDTH, PHEIGHT);
                    if (dbImage == null)
                    {
                        System.out.println("dbImage is null");
                        return;
                    }
                    else
                        dbg = dbImage.getGraphics();
                }

                // clear the background
                dbg.setColor(Color.white);
                dbg.fillRect(0, 0, PWIDTH, PHEIGHT);

                // draw game elements...

                if (gameOver)
                {
                    gameOverMessage(dbg);
                }
            } // end of gameRender()

        } // end of class Sim2D

    Hope this helps clear things up a bit, Jonathan

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database: one that uses a Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: customer info, order info, order item info, customer payment data, payment results, and current order status.

    Example: pseudo-SQL operations needed for processing an online e-tailer sale:

    - Insert Customer into Customers
    - Insert New Order into Orders
    - Insert Each New Order Item into OrderItems
    - Insert Customer Payment Info into PaymentInfo
    - Insert Payment Processing Result into PaymentDetails
    - Update Customer for Current Order Status

    Common Relational Database Management Systems:

    - Microsoft SQL Server
    - Microsoft Access
    - Oracle
    - MySQL
    - DB2

    It is important to note that no current RDBMS has fully implemented all of the relational principles.

    Common RDBMS traits:

    - Volatile data
    - Supports transaction processing
    - Optimized for updates and simple queries
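
    The grouping above is exactly what a database transaction guarantees: either every related operation commits, or none take effect. As a rough illustration, here is a hedged C# (ADO.NET) sketch of the e-tailer sale as one transaction - the connection string, table names, and SQL text are illustrative assumptions, not part of the article:

        using System.Data.SqlClient;

        public static class OrderProcessor
        {
            public static void PlaceOrder(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        try
                        {
                            // Each business operation enlists in the same
                            // transaction, mirroring the pseudo-SQL list above.
                            // (Hypothetical schema for illustration only.)
                            Execute("INSERT INTO Customers (Name) VALUES ('Jane Doe')", conn, tx);
                            Execute("INSERT INTO Orders (CustomerId, Status) VALUES (1, 'New')", conn, tx);
                            Execute("INSERT INTO OrderItems (OrderId, Sku, Qty) VALUES (1, 'ABC-1', 2)", conn, tx);
                            Execute("INSERT INTO PaymentInfo (OrderId, Method) VALUES (1, 'Card')", conn, tx);
                            Execute("INSERT INTO PaymentDetails (OrderId, Result) VALUES (1, 'Approved')", conn, tx);
                            Execute("UPDATE Customers SET OrderStatus = 'Placed' WHERE Id = 1", conn, tx);
                            tx.Commit();   // all business operations succeed together...
                        }
                        catch
                        {
                            tx.Rollback(); // ...or none of them take effect
                            throw;
                        }
                    }
                }
            }

            private static void Execute(string sql, SqlConnection conn, SqlTransaction tx)
            {
                using (var cmd = new SqlCommand(sql, conn, tx))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }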

    Read the article

  • How to show that the double-checked-lock pattern with Dictionary's TryGetValue is not threadsafe in

    - by Amir
    Recently I've seen some C# projects that use a double-checked-lock pattern on a Dictionary. Something like this:

        private static readonly object _lock = new object();
        private static volatile IDictionary<string, object> _cache = new Dictionary<string, object>();

        public static object Create(string key)
        {
            object val;
            if (!_cache.TryGetValue(key, out val))
            {
                lock (_lock)
                {
                    if (!_cache.TryGetValue(key, out val))
                    {
                        val = new object(); // factory construction based on key here.
                        _cache.Add(key, val);
                    }
                }
            }
            return val;
        }

    This code is incorrect, since the Dictionary can be "growing" the collection in _cache.Add() while _cache.TryGetValue (outside the lock) is iterating over the collection. It might be extremely unlikely in many situations, but it is still wrong. Is there a simple program to demonstrate that this code fails? Does it make sense to incorporate this into a unit test? And if so, how?
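
    One way to probe the race (a hedged sketch, not a guaranteed repro - the interleaving is timing-dependent): hammer Create() from several threads with many distinct keys, so that lock-free TryGetValue calls race against Dictionary growth inside the lock. Here Cache is an assumed static class wrapping the snippet's _lock, _cache and Create(). This is also why it makes a poor unit test: a passing run proves nothing.

        using System;
        using System.Threading;

        class DoubleCheckedLockStress
        {
            static void Main()
            {
                var threads = new Thread[8];
                for (int t = 0; t < threads.Length; t++)
                {
                    int seed = t;
                    threads[t] = new Thread(() =>
                    {
                        var rng = new Random(seed);
                        for (int i = 0; i < 1000000; i++)
                        {
                            // Reads race against resizes inside Add(); typical
                            // failures are wrong lookups, exceptions from
                            // corrupted buckets, or a reader spinning forever.
                            Cache.Create("key" + rng.Next(500000));
                        }
                    });
                    threads[t].Start();
                }
                foreach (var th in threads) th.Join();
                Console.WriteLine("No crash this run - which still proves nothing.");
            }
        }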

    Read the article

  • Multithreading and booleans

    - by Ikaso
    Hi everyone, I have a class that contains a boolean field, like this one:

        public class MyClass
        {
            private bool boolVal;

            public bool BoolVal
            {
                get { return boolVal; }
                set { boolVal = value; }
            }
        }

    The field can be read and written from many threads using the property. My question is whether I should fence the getter and setter with a lock statement, or should I simply use the volatile keyword and save the locking? Or should I totally ignore multithreading, since getting and setting boolean values is atomic?

    regards,
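
    For what it's worth, a hedged sketch of the volatile option (one common answer, not the last word): bool reads and writes are already atomic in .NET, so a lock buys nothing for a standalone flag, but without volatile (or a lock or memory barrier) a reader may keep seeing a stale cached value. Compound check-then-act sequences are a different story and need a lock or Interlocked:

        using System.Threading;

        public class MyClass
        {
            // volatile guarantees visibility: every read observes the most
            // recent write. Atomicity (no torn values) is already guaranteed
            // for bool.
            private volatile bool boolVal;

            public bool BoolVal
            {
                get { return boolVal; }
                set { boolVal = value; }
            }

            // For check-then-act, the whole sequence must be atomic.
            // Interlocked has no bool overload, so 0/1 stands in for false/true.
            private int claimed;

            public bool TryClaimOnce()
            {
                return Interlocked.CompareExchange(ref claimed, 1, 0) == 0;
            }
        }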

    Read the article

  • Limit Connections with semaphores

    - by Robert
    I'm trying to limit the number of connections my server will accept using semaphores, but when running, my code doesn't seem to enforce this restriction - am I using the semaphore correctly? E.g., I have hardcoded the number of permits as 2, but I can connect an unlimited number of clients...

        public class EServer implements Runnable {
            private ServerSocket serverSocket;
            private int numberofConnections = 0;
            private Semaphore sem = new Semaphore(2);
            private volatile boolean keepProcessing = true;

            public EServer(int port) throws IOException {
                serverSocket = new ServerSocket(port);
            }

            @Override
            public void run() {
                while (keepProcessing) {
                    try {
                        sem.acquire();
                        Socket socket = serverSocket.accept();
                        process(socket, getNextConnectionNumber());
                    } catch (Exception e) {
                    } finally {
                        sem.release();
                    }
                }
                closeIgnoringException(serverSocket);
            }

            private synchronized int getNextConnectionNumber() {
                return ++numberofConnections;
            }

            // processing related methods
        }

    Read the article

  • .NET values lookup

    - by Maciej
    Hi, I have a feeling I'm missing something obvious. This is a UDP receiver application. It holds a collection of valid UDP sender IPs - only senders with an IP on that list will be considered. Since that list must be consulted for every packet, and UDP traffic is so volatile, that operation must be as fast as possible. Dictionary is a good choice, but it is a key-value structure, and what I actually need here is a dictionary-like (hash lookup) key-only structure. Is there something like that? It's a small annoyance rather than a bug, but still - I can still use Dictionary. Thanks, M.
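
    A hedged pointer (assuming .NET 3.5 or later is available): HashSet<T> is exactly this - a key-only hash structure whose Contains() is an average O(1) lookup with no value payload. A minimal sketch, with class and member names chosen purely for illustration:

        using System.Collections.Generic;
        using System.Net;

        public class SenderFilter
        {
            // Key-only hash lookup: no dummy values as a Dictionary would need.
            private readonly HashSet<IPAddress> validSenders = new HashSet<IPAddress>();

            public void Allow(IPAddress ip)
            {
                validSenders.Add(ip);
            }

            // Consulted for every received packet.
            public bool IsAllowed(IPAddress sender)
            {
                return validSenders.Contains(sender);
            }
        }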

    Read the article

  • bridge methods explaination

    - by xdevel2000
    If I override the clone method, the compiler creates a bridge method to guarantee correct polymorphism:

        class Point {
            Point() { }

            protected Point clone() throws CloneNotSupportedException {
                return this; // not good, only for example!!!
            }

            // compiler-generated bridge method, as shown in the disassembly:
            protected volatile Object clone() throws CloneNotSupportedException {
                return clone();
            }
        }

    So when the clone method is invoked, the bridge method is invoked, and inside it the correct clone method is invoked. But my question is: when, inside the bridge method, return clone() is called, how does the VM know that it must invoke Point clone() and not the bridge method itself again?

    Read the article

  • How to correctly stop thread which is using Control.Invoke

    - by codymanix
    I tried the following (pseudocode), but I always get a deadlock when I am trying to stop my thread. The problem is that Join() waits for the thread to complete, and a pending Invoke() operation is also waiting to complete. How can I solve this?

        Thread workerThread = new Thread(BackupThreadRunner);
        volatile bool cancel;

        // this is the thread worker routine
        void BackupThreadRunner()
        {
            while (!cancel)
            {
                DoStuff();
                ReportProgress();
            }
        }

        // main thread
        void ReportProgress()
        {
            if (InvokeRequired)
            {
                Invoke(ReportProgress);
            }
            UpdateStatusBarAndStuff();
        }

        // main thread
        void DoCancel()
        {
            cancel = true;
            workerThread.Join();
        }
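
    One common way out (a hedged sketch against the pseudocode above, assuming a WinForms Control/Form): post the update with BeginInvoke instead of Invoke, so the worker never blocks waiting on a UI thread that is itself blocked inside Join():

        using System;
        using System.Windows.Forms;

        public class BackupForm : Form
        {
            void ReportProgress()
            {
                if (InvokeRequired)
                {
                    // Queues the call and returns immediately, so the worker
                    // can observe 'cancel' and exit even while the UI thread
                    // is blocked in workerThread.Join().
                    BeginInvoke(new Action(ReportProgress));
                    return; // without this, the worker would also fall through
                            // and run UpdateStatusBarAndStuff() on its own thread
                }
                UpdateStatusBarAndStuff();
            }

            void UpdateStatusBarAndStuff()
            {
                // UI updates go here (placeholder from the pseudocode above)
            }
        }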

    Read the article

  • Need help for this syntax: "#define LEDs (char *) 0x0003010"

    - by Noge
    I'm programming a soft-core processor, the Nios II from Altera. Below is the code from one of the tutorials. I managed to get the code working by testing it on the hardware (DE2 board); however, I cannot understand the code.

        #define Switches (volatile char *) 0x0003000
        #define LEDs (char *) 0x0003010

        void main()
        {
            while (1)
                *LEDs = *Switches;
        }

    What I know about #define is that it is used to define either a constant or a macro. So why does the above code contain casts like (char *) 0x0003010 inside a #define? And why do the two constants, Switches and LEDs, act like variables instead of constants?

    Thanks in advance!

    Read the article

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/

    The "weirdness" in part 7 is what caught my interest. My first thought was "that's just C# being weird". It's not: I wrote the following C++ code.

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();

        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }

        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad are as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis, in Option 1: it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for cache), then p[1] (1 cycle), then wait 1 cycle (for cache), then p[3] (1 cycle). Finally, 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles.

    In Option 2: it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on cache. Total 4 cycles.

    In Option 3: it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water).

    So... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also, I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine, so forgive the change in timings) with and without nops, and with different counts of nops:

        case 2 - 0.46    00401ABD jne (401AB0h)

        0 nops - 0.68    00401AB7 jne (401AB0h)
        1 nop  - 0.61    00401AB8 jne (401AB0h)
        2 nops - 0.636   00401AB9 jne (401AB0h)
        3 nops - 0.632   00401ABA jne (401AB0h)
        4 nops - 0.66    00401ABB jne (401AB0h)
        5 nops - 0.52    00401ABC jne (401AB0h)
        6 nops - 0.46    00401ABD jne (401AB0h)
        7 nops - 0.46    00401ABE jne (401AB0h)
        8 nops - 0.46    00401ABF jne (401AB0h)
        9 nops - 0.55    00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It does appear to be more some sort of memory-alignment oddity than some sort of instruction-throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though. I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency... but if that's it, why is the latency occurring? Curiouser and curiouser...

    Read the article

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit-wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity.

    What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit-wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:

        volatile uint32_t* p_reg = (volatile uint32_t*) 0xCAFE0000;
        *p_reg |= 0x01;

    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit-wide reads/writes. Since the machine code is often more dense for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.

    Read the article

  • Problem with increment in inline ARM assembly

    - by tech74
    Hi, I have the following bit of inline ARM assembly. It works in a debug build but crashes in a release build of iPhone SDK 3.1. The problem is the add instructions where I increment the address of the C variables output and x by 4 bytes, which is supposed to increment by the size of a float. I think that at some point when I increment, I am overwriting something. Can anyone say what the best way to handle this is? Thanks.

    C code that the asm is replacing (sum, output and x are all floats):

        for(int i = 0; i < count; i++)
            sum += output[i] * (*x++);

        asm volatile(
            ".align 4              \n\t"
            "mov r4,%3             \n\t"
            "flds s0,[%0]          \n\t"
            "0:                    \n\t"
            "flds s1,[%2]          \n\t"
            //"add %3,%3,#4        \n\t"
            "flds s2,[%1]          \n\t"
            //"add %2,%2,#4        \n\t"
            "subs r4,r4, #1        \n\t"
            "fmacs s0, s1, s2      \n\t"
            "bne 0b                \n\t"
            "fsts s0,[%0]          \n\t"
            :
            : "r" (&sum), "r" (output), "r" (x), "r" (count)
            : "r0","r4","cc", "memory", "s0","s1","s2"
        );

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11  | Next Page >