Search Results

Search found 30801 results on 1233 pages for 'hard link'.


  • how to check whether a path is an actual path or a symbolic link

    - by hits_lucky
    Hi, I am writing my own shell program. I am currently implementing the cd command using chdir. I want to implement cd with the options below: -P (do not follow symbolic links) and -L (follow symbolic links, the default). My question is: when a given path is entered at the shell, how do I figure out programmatically whether the path is a symbolic link or an absolute path? Thanks
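    A minimal sketch of the check (assuming POSIX): lstat() reports on the path itself without following it, while stat() follows the link to its target; realpath() then yields the physical directory a -P style cd would actually switch to.

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/stat.h>

        int main(int argc, char *argv[])
        {
            struct stat sb;
            char resolved[PATH_MAX];

            if (argc != 2)
                return 1;

            /* lstat() examines the path itself; stat() would follow the link. */
            if (lstat(argv[1], &sb) == 0 && S_ISLNK(sb.st_mode))
                printf("%s is a symbolic link\n", argv[1]);

            /* For cd -P, resolve symlinks to get the physical directory. */
            if (realpath(argv[1], resolved) != NULL)
                printf("physical path: %s\n", resolved);

            return 0;
        }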

    Read the article

  • Solaris 11 Update 1 - Link Aggregation

    - by Wesley Faria
    Solaris 11.1: At the beginning of this month, at Oracle's worldwide event Oracle Open World, the new release of Solaris 11 was launched. It arrives full of news: roughly 300 new features across networking, security, administration, and more. Today I will talk about a very interesting network feature, Link Aggregation. Solaris has supported Link Aggregation since Solaris 10 Update 1, but Solaris 11 Update 1 brings significant improvements. Link Aggregation, as the name itself says, is the aggregation of more than one physical network interface into one logical interface. Some of Link Aggregation's capabilities:
    · Increases bandwidth;
    · Improves resilience by performing failover and failback;
    · Simplifies network administration;
    Solaris 11.1 supports two types of Link Aggregation, Trunk aggregation and Datalink Multipathing aggregation. Both work by distributing network packets across the interfaces of the aggregation, ensuring better use of the network. Let's look at each one a little more closely.
    Trunk Aggregation: The goal of Trunk aggregation is to increase bandwidth, whether for applications with heavy network traffic or for consolidation. For example, say we have a server that was acquired to host several virtual machines, each with its own demand, and the server has two network cards. We can create an aggregation of those two cards so that Solaris 11.1 sees them as if they were one, doubling the bandwidth; see the figure below. The figure shows an aggregation with two physical cards, NIC 1 and NIC 2, connected to the same switch, and two virtual interfaces, VNIC A and VNIC B. For this to work, however, the switch must support LACP (Link Aggregation Control Protocol). LACP performs the aggregation at the switch layer; if that is not done, the packets leaving the server cannot be reassembled when they arrive at the switch. Another way to configure Trunk aggregation is point-to-point: instead of using a switch, the two servers are connected directly, and the aggregation on one server talks directly to the aggregation on the other, providing protection against failures as well as greater bandwidth. Let's see how to configure Trunk aggregation:
    1 – Check which links are available: # dladm show-link
    2 – Check the interfaces: # ipadm show-if
    3 – Delete the addressing of the existing interfaces: # ipadm delete-ip <interface>
    4 – Create the Trunk aggregation: # dladm create-aggr -L active -l <interface> -l <interface> aggr0
    5 – List the aggregation just created: # dladm show-aggr
    Data Link Multipath Aggregation: As we saw earlier, Trunk aggregation is implemented on a single LACP-capable switch, so we have a single point of failure: the switch. To solve this problem on Solaris 10 we used IPMP (IP Multipathing), which combines two aggregations into one link, in other words, another layer of virtualization. With Solaris 11 Update 1 this is no longer necessary: you can have one aggregation of physical interfaces with each interface connected to a different switch; see the figure below. Here we have an aggregation called aggr containing four physical interfaces, where interfaces NIC 1 and NIC 2 are connected to one switch and interfaces NIC 3 and NIC 4 are connected to another switch.
    In addition, four more virtual interfaces were created, vnic A, vnic B, vnic C and vnic D, which can be assigned to different applications or zones. With this we guarantee high availability at every layer, since we can tolerate failures in switches, in links, and in physical network interfaces. To configure it, follow the same steps as the Trunk aggregation configuration up to step 3, then do the following:
    4 – Create the DLMP aggregation: # dladm create-aggr -m haonly -l <interface> -l <interface> aggr0
    5 – List the aggregation just created: # dladm show-aggr
    Once configured, whether in Trunk aggregation mode or Data Link Multipathing aggregation mode, you can switch from one mode to the other, and you can add and remove physical or virtual interfaces. Well folks, that was what I had to show about the new Link Aggregation functionality in Solaris 11 Update 1. I hope you liked it; until the next new feature.

    Read the article

  • put link on table row or any other way to put link

    - by air
    I have PHP code that creates buttons:

        for ($i = 1; $i <= $n; $i++) {
            $row = mysql_fetch_array($result);
            if ($row['btn_color'] == 1) $btbg = "side-button5.png";
            if ($row['btn_color'] == 2) $btbg = "side-button6.png";
            if ($row['btn_color'] == 3) $btbg = "side-button7.png";
            if ($row['btn_color'] == 4) ?>
            <br>
            <table width="200" height="50" border="0" cellpadding="0" cellspacing="0">
              <tr>
                <td background="images/<?php echo $btbg; ?>" style="background-repeat:no-repeat">
                  <table width="100%" border="0" cellspacing="0" cellpadding="0">
                    <tr>
                      <td height="66">
                        <div align="center" class="buttonside">
                          <p>
                            <a class="buttonside" href="vpa.php?pgid=<?php echo $row['page_id']; ?>">
                              <?php echo $row['btn_text']; ?>
                            </a>
                          </p>
                        </div>
                      </td>
                    </tr>
                  </table>
                </td>
              </tr>
            </table>
        <?php } ?>

    This code is working fine, but the link is only on the text; I want to put the link on the full button (the background). Thanks

    Read the article

  • Get the full URI from the href property of a link

    - by Savageman
    Hello SO, I would like to have a confirmation on some point. My goal is to always get the same string (which is the URI in my case) while reading the href property from a link. Example:
    <a href="test.htm" /> with base_url = http://domain.name/
    <a href="../test.htm" /> with base_url = http://domain.name/domain/
    <a href="http://domain.name/test.htm" /> with base_url = any folder on http://domain.name/
    I need to get http://domain.name/test.htm from the 3 situations above (or any other string, as long as it is identical in all cases). After some tests, it appears that my_a_dom_node.href always returns the fully-qualified URI, including the http://domain.name part, which should be okay for what I want. jQuery has a different behaviour, and $(my_a_dom_node).attr('href') returns the content (text) that appears inside the HTML. So my trick is to use $(my_a_dom_node).get(0).href to get the full URI. The question is: can I rely on this?
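    For intuition about what the DOM's .href property does under the hood, here is a simplified sketch of the resolution rule, written in C since the logic is language-independent; it handles only the absolute and "../" cases from the examples above, not the full RFC 3986 algorithm.

        #include <stdio.h>
        #include <string.h>

        /* Simplified resolution: absolute URLs pass through; "../" segments
           pop directories off a base that is assumed to end in '/'. */
        static void resolve(const char *base, const char *href,
                            char *out, size_t cap)
        {
            if (strncmp(href, "http://", 7) == 0 ||
                strncmp(href, "https://", 8) == 0) {
                snprintf(out, cap, "%s", href);    /* already absolute */
                return;
            }
            char dir[512];
            snprintf(dir, sizeof dir, "%s", base);
            while (strncmp(href, "../", 3) == 0) { /* pop one dir per ../ */
                char *slash = strrchr(dir, '/');   /* trailing slash      */
                if (slash) *slash = '\0';
                slash = strrchr(dir, '/');         /* slash before parent */
                if (slash) slash[1] = '\0';
                href += 3;
            }
            snprintf(out, cap, "%s%s", dir, href);
        }

        int main(void)
        {
            char out[512];
            resolve("http://domain.name/domain/", "../test.htm", out, sizeof out);
            printf("%s\n", out);   /* http://domain.name/test.htm */
            resolve("http://domain.name/", "test.htm", out, sizeof out);
            printf("%s\n", out);   /* http://domain.name/test.htm */
            return 0;
        }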

    Read the article

  • Trying to create link with NSTextField

    - by Chris
    I'm using this category (is that right?) http://www.nightproductions.net/references/dsclickableurltextfield_reference.html#setAttributedStringValue to implement clickable text fields. I've imported the header file in my controller class and set its attributed string value like this:

        NSAttributedString *str = [[NSAttributedString alloc] initWithString:@"http://www.yahoo.com"];
        [url setAttributedStringValue:(NSAttributedString *)str];
        [str release];

    The text field is not selectable and not editable. The text field value is set, but it's not clickable and it's not a link. Thanks in advance.

    Read the article

  • Strange margin behaviour with sIFR spans in link (screens)

    - by knalstaaf
    I'm making a menu and it's supposed to look like picture nr 1 on this link. However, at this moment it looks like picture nr 2. There's no logic to this behaviour, since all three elements have the same CSS attributes. Moreover, let's change the word "Reiki" to "Psychotherapie" and see what happens (picture nr 3). For some reason the word "Reiki" ignores certain attributes. This is not a problem when sIFR is turned off. I guess I'll need some extra CSS to solve this (messing with paddings, heights or margins on that specific element doesn't give any result; it just won't budge), unless someone knows a more elegant solution?

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the expected 'clicking' noise that most failing hard drives make, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for and attempt recovery of bad sectors," the tool begins to run, then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive as being 'Healthy.' I dropped the drive off at a data recovery store and they said that "The data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments), but it returns the following error: "The type of the filesystem is RAW. CHKDSK is not available for RAW drives." Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions: I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage is photos), but how long should I expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities or tools and strategies with which you've had success.
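    A rough lower bound on the runtime question, assuming (optimistically) that the drive can still stream at a healthy sustained rate: any recovery pass has to read every byte at least once.

        #include <stdio.h>

        int main(void)
        {
            double bytes = 400e9;   /* ~400 GB of data                 */
            double rate  = 50e6;    /* assumed healthy ~50 MB/s stream */

            printf("one full pass: ~%.1f hours\n", bytes / rate / 3600.0);
            /* prints: one full pass: ~2.2 hours -- but a failing drive
               that retries bad sectors can stretch this into days. */
            return 0;
        }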

    Read the article

  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation. System specs:
    Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10000 RPM)
    Motherboard: Gigabyte Z77X-UD3H
    BIOS SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s port)
    Windows 7 Enterprise SP1 64-bit
    The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?). Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating 2 partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data. Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.

    Read the article

  • How can I write directly to my Zune HD hard drive?

    - by iamgoat
    When syncing photos to the Zune HD, it resizes them down to a much lower resolution, which means I cannot load a high-res picture on it (a comic book) and zoom in to read it. This defeats the whole purpose of having a zoom feature. There is a registry hack you can make to get the Zune to display under My Computer. Then, if you killed the Zune process while it was syncing, you'd be able to access it like a hard drive and copy files to it. It seems like the more recent firmware and/or Zune software version now prevents this. How can I treat it like an HDD and copy files to it? I simply want to take my original pictures folder and copy it over the low-resolution versions the Zune software resized them to. An alternative option would be to remove the hard drive from it and see if I can connect it to a computer directly, but I just got this and don't want to disassemble it yet. Note to Microsoft: why do you allow me to set the encoding quality of music, but not photos?

    Read the article

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video. Any ideas as to how I can fix it? My iPod classic 160GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7.
    iTunes is on the latest version - 9.1.1.12
    iPod software is up to date - 1.1.2
    Windows 7 is fully up to date and patched
    The symptoms are that the iPod will start to sync; all audio (music and podcasts) will sync successfully, but the syncing will then just appear to continue (iTunes message: "Syncing iPod. Do not Disconnect."). This sync never completes; I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync from default options and again syncs audio, but nothing else. I suspect that something has gone wrong on the hard drive, either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod and work as normal after the format? Any advice on what to try next is much appreciated.
    Update: I have eliminated problems with the files, PC or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables which work with other iPods. What I'd really like to know is if there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Anyone ever managed this?

    Read the article

  • Can an OS be copied from one hard drive to another and still boot?

    - by AlexMorley-Finch
    Background: My computer gets stuck on the make and model screen after the BIOS screen, aka the Toshiba screen. After some research I've realized that the problem is the hard drive. I'm using an old 250GB model that USED to be used for backup purposes; however, I loaded Windows 7 Ultimate onto it. This hard drive has trouble getting up to full RPM and therefore cannot boot correctly until it's warmed up, meaning that my PC needs to be restarted several times before it boots (once it took me 13 reboots to get my PC on!). From my research it's either that or a lack of power supply, and I've tried multiple PSUs.
    Question: I have my OS and all my files on this 250GB HDD. If I were to literally open Explorer and copy EVERYTHING (including hidden files, obviously) from this 250GB drive to a spare 500GB I've got knocking about, will it boot if I just copy everything? I cannot be bothered to load another OS onto my PC, so if there is a way I can just copy the existing one over from one HDD to another and have it boot normally, that would be epic! I've heard about HDD cloning software, but before I purchase and/or download this software, I need to know if I can just copy the OS over through Windows Explorer.
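    For background on why a plain file copy usually fails to boot: the MBR and the boot loader live outside the filesystem, so Windows Explorer never copies them; cloning software copies the disk sector by sector, picking up the boot code and partition table along with the files. A minimal, illustrative sketch of that idea (the Linux-style device paths are hypothetical, this is destructive if pointed at real disks, and real tools such as dd or Clonezilla are the practical route):

        #include <stdio.h>
        #include <stdlib.h>

        /* Sector-level copy: clones the MBR, boot loader, and filesystem alike. */
        int main(void)
        {
            FILE *src = fopen("/dev/sdX", "rb");   /* hypothetical source disk */
            FILE *dst = fopen("/dev/sdY", "wb");   /* hypothetical target disk */
            static char buf[1 << 20];              /* copy 1 MiB at a time     */
            size_t n;

            if (!src || !dst) {
                perror("open");
                return EXIT_FAILURE;
            }
            while ((n = fread(buf, 1, sizeof buf, src)) > 0)
                fwrite(buf, 1, n, dst);

            fclose(src);
            fclose(dst);
            return 0;
        }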

    Read the article

  • Windows 7 does not detect my new hard drive?

    - by jasondavis
    I just built a really nice new PC. Some specs...
    Intel i7-930 CPU
    ASRock Extreme X58 motherboard with SATA 3 and USB 3.0
    12GB of G.Skill DDR3 RAM
    80GB Intel G2 solid state drive for Windows 7 and other programs to run on
    Windows 7 Pro 64-bit OS
    2 1GB graphics cards for 4-monitor support
    Those are the main components. Well, today my new 1TB Western Digital hard drive came, which I plan to use for data to preserve the life of my SSD (hopefully). I hooked up its SATA power input and then hooked up its SATA data cable to a SATA 2 port on my motherboard. I boot Windows 7, go into My Computer, and the drive is not showing up with my other drives. I then re-boot and check again; no luck. I then shut down the PC, open the case back up, and check my connections, and they all look good. I then boot up and I can see the new HDD is on and spinning. I then go into my BIOS settings to see if it registers there, and it DOES! It shows I have a WD 1TB hard drive on SATA 2 port 6. So I am at a loss as to why it is not showing up as an option in Windows. Windows acts as if the drive is not there. Please help.

    Read the article

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick story behind this: I was trying to set up a wireless networked hard drive at home. My wireless router doesn't take USB, so I am considering a few options. First I was considering getting something like a WD My Cloud. My router is an old one provided by the service provider; it only has 10/100 Ethernet, while the WD My Cloud has a Gigabit interface. So unless I change to a new router, data transfer will be slow, and upgrading the router is a must if I want fast transfer speeds. Plus, I already own an external hard drive with a USB 3.0 interface, so if I get a router like the Netgear D6300 I can get a decent-speed wireless shared drive at home, and I can use my existing HDD instead of a WD My Cloud. But the router isn't cheap, so I am saving up for that. In the meantime I found out about the existence of USB-to-RJ45 adaptors. I read the reviews, and some say they work and some say they don't; they didn't really say what they were trying to do, so I'm confused. So, if I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor will only have USB 2.0 and 10/100 Ethernet, but that's fine as it's temporary until I get my new router.

    Read the article

  • Hard Drive problem: is it the SATA controller or the HDD itself?

    - by Drooling_Sheep
    I have a Samsung 1.5TB hard drive hooked up to an ECS H55H-I mini-ITX motherboard. I have XBMC 10 (modified Ubuntu 10.04) installed for use as an HTPC. The hard drive encounters occasional errors during normal use which cause it to be remounted read-only. I have updated the BIOS on the motherboard, changed the SATA cable and moved it to different ports on the motherboard, and installed and re-installed the OS (including different versions of XBMC and generic Ubuntu), all to no avail. I recently ran tests both with badblocks -sv and smartctl -t long. Both reported no errors. This makes me think the motherboard or SATA controller is probably the issue. Does anyone know of any further tests I can do to help narrow this down? The processor is a Core i3; I forget the model number, but it's one of the 32nm ones with on-package graphics. There's no discrete video card or optical drive. The power supply is a 150W Rosewill (pretty sure) that came with the case.

    Read the article

  • Xcode Link Frameworks "Relative to Current SDK" Doesn't Work When Mixing Mac Framework and iPhone Static Library Targets

    - by bl4th3rsk1t3
    I have a framework of code I maintain. It's got Mac and iPhone Objective-C code, and some of it is shared. I'm not having any problems with the code; it's a problem with Xcode. Let's just call my framework "AwesomeKit" for this problem. The first thing I did was create an Xcode framework project called "AwesomeKit", add source files to it, and link against the common Mac frameworks: Foundation, Cocoa, Carbon, etc. It compiles fine. Then I added a new "static library" target, let's call it "AwesomeKit-iPhone", and set the base SDK in the build settings to iPhone Device 3.1.3. The problem comes when I try to add "Existing Frameworks" to the AwesomeKit-iPhone target:
    - First, change the current build target to AwesomeKit-iPhone.
    - Right click on any group and select "Add Existing Frameworks..."
    - Choose UIKit.framework
    UIKit will immediately be highlighted red, as if it's missing. It is indeed missing, because Xcode resolves the "Relative to Current SDK" path against the "Mac OS 10.6" SDK when it should be resolving it against the current target's base SDK, iPhone Device 3.1.3. What the heck? Has anyone experienced this? This is really annoying.

    Read the article

  • jQuery menu active link

    - by antosha
    Hello, I am trying to make a jQuery menu where, when I click on one of the links (without reloading the page), it changes its class to "active" and removes this class when I click on another link. Here is my code:

        <script type="text/javascript">
        $(document).ready(function() {
            $(".buttons").children().("a").click(function() {
                $(".buttons").children().toggleClass("selected").siblings().removeClass("selected");
            });
        });
        </script>

        <ul class="buttons">
            <li><a class="button" href="#">Link1</a></li>
            <li><a class="button" href="#">Link2</a></li>
            <li><a class="button" href="#">Link3</a></li>
            <li><a class="button" href="#">Link4</a></li>
        </ul>

    Can someone tell me why my code is not working and how to fix it? Thanks :)

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately cause code to be reusable or not, regardless of the language or methodology.
    But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.
    It's not that I don't like reuse; it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse!
    Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, which includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.
    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to reign that thread in, of course, I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
    Some of them are standard OO principles, and some are kind-of home grown litmus tests:
    Single Responsibility Principle (SRP) – The only reason this extension method would need to change is if the Thread class itself changes (one responsibility).
    Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    Open-Closed Principle (OCP) – This class is inherently closed to change.
    Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go", already unit tested (important!), and available through a standard release cycle (very important!).
    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...

            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:
    Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which in turn is largely dependent on the business definition of an account, which can be inherently unstable.
    Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed.
    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).
    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.
    Think about the times you've worked in a company where people fight in design sessions over the best way to implement a class to make it maximally reusable, extensible, and any-other-buzzword-able. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.
    So what do I think are reusable components?
    Generic Utility Classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    Simplification and Uniformity Frameworks – to some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).
    And what are less reusable?
    Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. May be reused across applications but are also very volatile.
    Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.
    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Flash carousel xml parse html link

    - by Marvin
    Hello, I am trying to modify a carousel script I have in Flash. Its normal function is to make some icons rotate; when clicked, they zoom in, all the others fade, and a little text is displayed. In that text I would like to have a link, like a "read more". If I use CDATA it won't display a thing; if I use alt chars like &#60;a href=&#34;www.google.com&#34;&#62; Read more + &#60;/a&#62;, it just displays the text as: <a href="www.google.com"> Read more + </a>. The Flash dynamic text box won't render it as HTML. I don't know enough AS2 to figure out how to add this. My code:

        var xml:XML = new XML();
        xml.ignoreWhite = true;
        // XML settings
        xml.onLoad = function() {
            var nodes = this.firstChild.childNodes;
            numOfItems = nodes.length;
            for (var i = 0; i < numOfItems; i++) {
                var t = home.attachMovie("item", "item" + i, i + 1);
                t.angle = i * ((Math.PI * 2) / numOfItems);
                t.onEnterFrame = mover;
                t.toolText = nodes[i].attributes.tooltip;
                t.content = nodes[i].attributes.content;
                t.icon.inner.loadMovie(nodes[i].attributes.image);
                t.r.inner.loadMovie(nodes[i].attributes.image);
                t.icon.onRollOver = over;
                t.icon.onRollOut = out;
                t.icon.onRelease = released;
            }
        }

    And the XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <icons>
            <icon image="images/product.swf" tooltip="Product" content="Hello this is some random text &#60;a href=&#34;www.google.com&#34;&#62; Read More + &#60;/a&#62; "/>
        </icons>

    Any suggestions? Thanks.

    Read the article

  • How do you determine using stat() whether a file is a symbolic link?

    - by hora
    I basically have to write a clone of the UNIX ls command for a class, and I've got almost everything working. One thing I can't seem to figure out is how to check whether a file is a symbolic link or not. From the man page for stat(), I see that there is a mode_t value defined, S_IFLNK. This is how I'm trying to check whether a file is a symlink, with no luck (note: stbuf is the buffer that stat() returned the inode data into):

        switch (stbuf.st_mode & S_IFMT) {
        case S_IFLNK:
            printf("this is a link\n");
            break;
        case S_IFREG:
            printf("this is not a link\n");
            break;
        }

    My code ALWAYS prints "this is not a link", even when it is one, and I know for a fact that the said file is a symbolic link since the actual ls command says so, plus I created the symlink... Can anyone spot what I may be doing wrong? Thanks for the help!
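    A likely cause, offered as a hint rather than a definitive answer: stat() follows symbolic links, so the st_mode it returns describes the link's target and can never contain S_IFLNK; lstat() reports on the link itself. A minimal sketch of the fix:

        #include <stdio.h>
        #include <sys/stat.h>

        void classify(const char *path)
        {
            struct stat stbuf;

            /* lstat(), not stat(): stat() resolves the symlink first. */
            if (lstat(path, &stbuf) == -1) {
                perror("lstat");
                return;
            }
            switch (stbuf.st_mode & S_IFMT) {
            case S_IFLNK:
                printf("this is a link\n");
                break;
            case S_IFREG:
                printf("this is not a link\n");
                break;
            }
        }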

    Read the article

  • Not sure how to link json 100% in php

    - by ronhdoge
    I'm trying to create an RSS feed that my Droid app reads, but I have some holes that I can't figure out how to fix. The RSS link page is http://www.mandarich.com/mandarichServer/mlb/indexbaseball.php. When reading the RSS I can see where the icon is missing on some entries and can't figure out why, and I can't figure out Saint Louis at all. The code I have for the PHP is as follows:

        <?php
        $teams["boston"] = "bostonredsox.gif";
        $teams["nyyankees"] = "newyorkyankes.gif";
        $teams["baltimore"] = "baltimoreorioles.gif";
        $teams["tampa"] = "tampabayrays.gif";
        $teams["toronto"] = "torontobluejays.gif";
        $teams["atlanta"] = "atlantabraves.gif";
        $teams["florida"] = "floridamarlins.gif";
        $teams["nymets"] = "newyorkmets.gif";
        $teams["philadelphia"] = "philadelphiaphillies.gif";
        $teams["washington"] = "washingtonnationals.gif";
        $teams["chicagosox"] = "chicagowhitesox.gif";
        $teams["cleveland"] = "clevelandindians.gif";
        $teams["detroit"] = "detroittigers.gif";
        $teams["kansas"] = "kansascityroyals.gif";
        $teams["minnesota"] = "minnesotatwins.gif";
        $teams["chicagocubs"] = "chicagocubs.gif";
        $teams["cincinnati"] = "cinncinatireds.gif";
        $teams["houston"] = "houstonastros.gif";
        $teams["milwaukee"] = "milwaukeebrewers.gif";
        $teams["pittsburgh"] = "pitsburghpirates.gif";
        $teams["st.louis"] = "stlouiscardinals.gif";
        $teams["laangels"] = "losangelesangels.gif";
        $teams["oakland"] = "oaklandathletics.gif";
        $teams["seattle"] = "seattlemariners.gif";
        $teams["texas"] = "texasrangers.gif";
        $teams["arizona"] = "arizonadiamondbacks.gif";
        $teams["colorado"] = "coloradorockies.gif";
        $teams["ladodgers"] = "losangelesdodgers.gif";
        $teams["sandiego"] = "sandiegopadres.gif";
        $teams["sanfrancisco"] = "sanfranciscogiants.gif";

        $abbr["arizona"] = "ARI";
        $abbr["oakland"] = "OAK";
        $abbr["baltimore"] = "BAL";
        $abbr["tampa"] = "TAM";
        $abbr["boston"] = "BOS";
        $abbr["nyyankees"] = "NYY";
        $abbr["texas"] = "TEX";
        $abbr["toronto"] = "TOR";
        $abbr["laangels"] = "LAA";
        $abbr["atlanta"] = "ALT";
        $abbr["colorado"] = "COL";
        $abbr["philadelphia"] = "PHI";
        $abbr["florida"] = "FLA";
        $abbr["milwaukee"] = "MIL";
        $abbr["washington"] = "WAS";
        $abbr["chicagosox"] = "CHW";
        $abbr["cleveland"] = "CLE";
        $abbr["detroit"] = "DET";
        $abbr["seattle"] = "SEA";
        $abbr["sanfrancisco"] = "SFO";
        $abbr["st.louis"] = "STL";
        $abbr["chicagocubs"] = "CHC";
        $abbr["houston"] = "HOU";
        $abbr["nymets"] = "NYM";
        $abbr["cincinnati"] = "CIN";
        $abbr["sandiego"] = "SDG";
        $abbr["ladodgers"] = "LAD";
        $abbr["pittsburgh"] = "PIT";
        $abbr["minnesota"] = "MIN";
        $abbr["kansas"] = "KAN";
        ?>

    Read the article

  • Why are there hard faults when my RAM is not 100% used?

    - by Vilx-
    I've got 2GB of RAM, and the resource monitor shows that only about 75% of it is used. However, there are some apps (NetBeans, Visual Studio) that every once in a while start generating a lot of hard faults (up to and over 2000/min), thus predictably slowing down to a crawl. How is this so? The memory usage during these "fits" doesn't change. Perhaps it also includes memory-mapped files or something?
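    The memory-mapped-file hunch is plausible: touching a mapped page that is not resident triggers a hard (disk-backed) fault even when plenty of RAM is free, which is exactly the kind of I/O file-heavy IDEs do. A minimal POSIX sketch of the mechanism (the file name is hypothetical; Windows does the equivalent through MapViewOfFile):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("big.bin", O_RDONLY);    /* hypothetical large file */
            struct stat sb;
            if (fd == -1 || fstat(fd, &sb) == -1)
                return 1;

            char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED)
                return 1;

            long pagesz = sysconf(_SC_PAGESIZE);
            long sum = 0;
            /* First touch of each page faults it in from disk: a hard fault,
               regardless of how much RAM is sitting free. */
            for (off_t i = 0; i < sb.st_size; i += pagesz)
                sum += p[i];

            printf("%ld\n", sum);
            munmap(p, sb.st_size);
            close(fd);
            return 0;
        }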

    Read the article

  • How to upgrade the hard drive in a MacBook Pro?

    - by John McC
    I have a MacBook Pro running Snow Leopard whose hard drive I want to upgrade to a new, larger one. I also have a current Time Machine backup on an external USB drive, and an external SATA case that I can put a 2.5" drive in. What's the best procedure for transferring the existing installation to the new drive?

    Read the article

  • Hard Drives: Always on or spin up/down as needed?

    - by Terminal Frost
    The specific application is a NAS that hosts media content and is frequently accessed during the day. My NAS was probably cycling on and off around 5 times a day so I decided to not allow it to spin down. I'm thinking this will be better for the drive, but I am not sure. I am wondering, however, if there is any concrete information out there as to what causes more detrimental wear on a hard disk: Spinning 24/7 or cycling on and off as needed?

    Read the article

  • Seeking (somewhat) better explanations about supporting > 2.1 TB hard drives.

    - by irrational John
    Today while Googling about, I stumbled across posts claiming that Seagate plans to ship a 3TB drive sometime later in 2010. Unfortunately, the stuff I looked at all seemed to contain tidbits of info which I didn't think fit together properly. (I would link to some examples, but I'm only allowed 1 link per post at the moment). Now I really don't have any "need" to better understand the underlying tedious details of this. I am just curious. And confused. So ... some questions I'm hoping someone better informed than I might answer.
    The talk about a potential addressing problem in both the hardware and the software confused me. The assertion is that something called Long LBA addressing (LLBA) is needed in the Command Descriptor Block as a way to get around the current limits on accessing a hard drive bigger than ~2.1 (or ~2.2?) TB. OK, fine. But I thought the last time this problem came up it was solved by extending the length of the LBA field from 28 to 48 bits. (Remember this website? www.48bitlba.com) A 6-byte LBA is clearly large enough, so what's up with this LLBA talk? I thought this was all fixed by Win XP SP2, if not sooner? And certainly all the hardware should be up to the task, shouldn't it?
    The real problem, as I understand it, with drives much bigger than 2 TB is the 4-byte LBA fields in the Master Boot Record (MBR) used to partition just about all hard drives at the moment. The most likely solution is to migrate to Intel's GUID Partition Table (GPT). A GPT uses 8-byte fields for the LBA. What I don't understand in this context is what the problem is with booting, say, Windows from a 3TB drive that uses a GPT. Granted, the current PC BIOS wouldn't know how to recognize or work with a GPT. But every GPT comes with a so-called "Safety" or "Guarding" MBR in sector 0. Apple already uses a hybrid version of the MBR to allow them to boot Windows on their Intel Macs (aka Boot Camp). Couldn't something similar be done to allow the PC BIOS to recognize and boot from a partition in, say, the first 1 GB of a 3TB or larger drive?
    I've got more questions, such as where 4K sectors fit into all of this. But it's probably time I just shut up and posted this. ;-)
    -irrational John
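    For the arithmetic behind the ~2.2 TB figure: the MBR partition table stores LBAs in 32-bit fields, and with the classic 512-byte sector that caps addressable capacity at 2^32 × 512 bytes. A quick check:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t sectors = 1ULL << 32;      /* 32-bit LBA fields in the MBR */
            uint64_t bytes   = sectors * 512;   /* classic 512-byte sectors     */

            /* Prints: 2199023255552 bytes = 2.20 TB */
            printf("%llu bytes = %.2f TB\n",
                   (unsigned long long)bytes, bytes / 1e12);
            return 0;
        }

    The same 32-bit field with 4 KiB sectors reaches about 17.6 TB, which is one place the 4K-sector question fits in.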

    Read the article
