Search Results

Search found 9044 results on 362 pages for 'bad sector'.


  • Increase Samba space on openSUSE 12.1

    - by Kapil Sharma
    I know Linux basics but I am not an expert. The IT guy left the job here and there is some time before a new hire arrives, so sorry if the question is very basic. We have a local testing server based on openSUSE 12.1, which also acts as a shared drive between the dev/mgmt teams here, using Samba for that. Now we are running out of space on Samba, even though the server's 2*1TB hard disk is nearly 90% free. My question is: what is limiting Samba, and how can I increase its limit? We need at least around 500 GB as a shared drive, but currently it's just 25 GB. I don't need a step-by-step answer; just a link to any helpful article would be sufficient. Probably I'm putting the wrong keywords into Google, so I'm not getting any helpful link.

    EDIT: Output of the commands in the first comment. All commands were run as the root user.

    df -h (getting an error with df -ht)

        Filesystem      Size  Used  Avail  Use%  Mounted on
        rootfs           30G  5.1G    23G   19%  /
        devtmpfs        2.0G   36K   2.0G    1%  /dev
        tmpfs           2.0G  1.1M   2.0G    1%  /dev/shm
        tmpfs           2.0G  676K   2.0G    1%  /run
        /dev/sda2        30G  5.1G    23G   19%  /
        tmpfs           2.0G     0   2.0G    0%  /sys/fs/cgroup
        tmpfs           2.0G  676K   2.0G    1%  /var/run
        tmpfs           2.0G     0   2.0G    0%  /media
        tmpfs           2.0G  676K   2.0G    1%  /var/lock
        /dev/sda3        36G   31G   3.3G   91%  /home

    fdisk -l /dev/[hmsv]d*

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x2d4a2d49

           Device Boot     Start        End    Blocks  Id  System
        /dev/sda1            2048   16771071   8384512  82  Linux swap / Solaris
        /dev/sda2   *    16771072   79681535  31455232  83  Linux
        /dev/sda3        79681536  156301311  38309888  83  Linux

        Disk /dev/sda1: 8585 MB, 8585740288 bytes
        255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 32.2 GB, 32210157568 bytes
        255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
           Device Boot     Start        End    Blocks  Id  System

        Disk /dev/sda3: 39.2 GB, 39229325312 bytes
        255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda3 doesn't contain a valid partition table

    vgs

        No volume groups found

    lvs

        No volume groups found

    Output of /etc/samba/smb.conf:

        # smb.conf is the main Samba configuration file. You find a full commented
        # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the
        # samba-doc package is installed.
        # Date: 2011-11-02

        [global]
            workgroup = WORKGROUP
            passdb backend = tdbsam
            printing = cups
            printcap name = cups
            printcap cache time = 750
            cups options = raw
            map to guest = Bad User
            include = /etc/samba/dhcp.conf
            logon path = \\%L\profiles\.msprofile
            logon home = \\%L\%U\.9xprofile
            logon drive = P:
            usershare allow guests = Yes

        [homes]
            comment = Home Directories
            valid users = %S, %D%w%S
            browseable = No
            read only = No
            inherit acls = Yes

        [profiles]
            comment = Network Profiles Service
            path = %H
            read only = No
            store dos attributes = Yes
            create mask = 0600
            directory mask = 0700

        [users]
            comment = All users
            path = /home
            read only = No
            inherit acls = Yes
            veto files = /aquota.user/groups/shares/

        [groups]
            comment = All groups
            path = /home/groups
            read only = No
            inherit acls = Yes

        [printers]
            comment = All Printers
            path = /var/tmp
            printable = Yes
            create mask = 0600
            browseable = No

        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/drivers
            write list = @ntadmin root
            force group = ntadmin
            create mask = 0664
            directory mask = 0775

        [allusers]
            comment = All Users
            path = /home/shares/allusers
            valid users = @users
            force group = users
            create mask = 0660
            directory mask = 0771
            writable = yes
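    The df output above suggests where the 25 GB limit actually comes from: the file shares in smb.conf ([users], [groups], [allusers]) all live under /home, and /home (/dev/sda3) is a 36 GB partition that is already 91% full, so Samba itself is not imposing any quota. A minimal sketch of one way to add space follows; it assumes a second, currently unused disk really exists and shows up as /dev/sdb (the fdisk output above only lists /dev/sda, so confirm that first; the device name, mount point and filesystem here are assumptions, not taken from the question):

        # identify the unused disk and partition it (this destroys any existing data on it!)
        fdisk -l                                  # confirm which device is the empty disk
        parted -s /dev/sdb mklabel gpt
        parted -s /dev/sdb mkpart primary ext4 0% 100%
        mkfs.ext4 /dev/sdb1                       # create a filesystem on the new partition

        # mount it where the Samba share already points
        mount /dev/sdb1 /mnt
        cp -a /home/shares/allusers/. /mnt/       # preserve the existing shared files
        umount /mnt
        echo '/dev/sdb1 /home/shares/allusers ext4 defaults 0 2' >> /etc/fstab
        mount -a
        df -h /home/shares/allusers               # the [allusers] share now has the new capacity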

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name: \mymachine\x$ my account would get locked out. I would also get the following warning (Event ID 100, Type “Warning”) 5 times under the “System” group in Event Viewer on my box: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password. I would also get the following warning 3 times: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to. On the domain controller, Event ID 680 of type “Failure Audit” would appear 4 times under the “Security” group in Event Viewer: Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Logon account: myaccount Followed by Event ID 644: User Account Locked Out: Target Account Name: myaccount Target Account ID: OURDOMAIN\myaccount Caller Machine Name: MYMACHINE Caller User Name: STAN$ Caller Domain: OURDOMAIN Caller Logon ID: (0x0,0x3E7) Followed by another 4 errors having Event ID 680. Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit “Cancel” in response to the user name/password prompt, the following message box would display: Windows cannot find \mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search. I checked with others in the group using XP and they only got the above message box when browsing to a “bad” drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the “bad” drive letter, behind the scenes XP was trying to login 8 times using bad credentials (or, at least a bad password as the login was correct), causing my account to get locked out on the 4th try. Interestingly, If I tried browsing to a “good” drive such as “c$” it would work fine. As a test, I tried logging on to my box as a different login and browsing the “bad” UNC path. Strangely, my “ourdomain\myaccount” account was getting locked out – not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed. After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting “Anonymous access and authentication control” located under: Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password I found no indication while scouring the Internet that this property was related to my UNC problem. But, I did notice that this property was set to my domain user name and password. And, my password did age recently but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem. I was no longer prompted for a user name/password when browsing the UNC path and the account lock-outs ceased. Now, a couple of questions: Why would an IIS setting affect the browsing of a UNC path on a local box? 
Why had I not encountered this problem before? My password has aged several times and I’ve never encountered this problem. And, I can’t remember the last time I updated the “Anonymous access” IIS password it’s been so long. I’ve run the script after a password reset before and never had my account locked-out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install “Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)” on my box on 7/29/2009. I wonder if this is responsible.

    Read the article

  • Set up access to SAS RAID drives with NTFS partitions on a CentOS machine

    - by Quanano
    We have a Dell Poweredge 2900 system with Adaptec 39320A SCSI CONTROLLER CARD and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on the other raid array with a different controller and it is working fine. We are now trying to access the drives shown above and they are not being shown in /dev as sdb, etc. sda is the drive that we installed centos on and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for scsi devices then the perc and adaptec controllers are both found. sg0 is the CDROM and sg2 is the centos installed, however I think sg1 is the other drive but I cannot see anyway to mount the partitions, as only the drive is listed in /dev. Thanks. EXTRA INFO fdisk -l Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x11e3119f Device Boot Start End Blocks Id System /dev/sda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 8845 70528000 8e Linux LVM Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes 255 heads, 63 sectors/track, 4186 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes 255 heads, 63 sectors/track, 2570 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes 255 heads, 63 sectors/track, 2023 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table These are all from the install hdd not the additional hard drives modprobe a320raid FATAL: Module a320raid not found. lsscsi -v [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0 dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0] [4:0:10:0] enclosu DP BACKPLANE 1.05 - dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0] [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0] . 
lsmod Module Size Used by fuse 66285 0 des_generic 16604 0 ecb 2209 0 md4 3461 0 nls_utf8 1455 0 cifs 278370 0 autofs4 26888 4 ipt_REJECT 2383 0 ip6t_REJECT 4628 2 nf_conntrack_ipv6 8748 2 nf_defrag_ipv6 12182 1 nf_conntrack_ipv6 xt_state 1492 2 nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state ip6table_filter 2889 1 ip6_tables 19458 1 ip6table_filter ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6 bnx2 79618 0 ses 6859 0 enclosure 8395 1 ses dcdbas 9219 0 serio_raw 4818 0 sg 30124 0 iTCO_wdt 13662 0 iTCO_vendor_support 3088 1 iTCO_wdt i5000_edac 8867 0 edac_core 46773 3 i5000_edac i5k_amb 5105 0 shpchp 33482 0 ext4 364410 3 mbcache 8144 1 ext4 jbd2 88738 1 ext4 sd_mod 39488 3 crc_t10dif 1541 1 sd_mod sr_mod 16228 0 cdrom 39771 1 sr_mod megaraid_sas 77090 2 aic79xx 129492 0 scsi_transport_spi 26151 1 aic79xx pata_acpi 3701 0 ata_generic 3837 0 ata_piix 22846 0 radeon 1023359 1 ttm 70328 1 radeon drm_kms_helper 33236 1 radeon drm 230675 3 radeon,ttm,drm_kms_helper i2c_algo_bit 5762 1 radeon i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit dm_mirror 14101 0 dm_region_hash 12170 1 dm_mirror dm_log 10122 2 dm_mirror,dm_region_hash dm_mod 81500 11 dm_mirror,dm_log
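    For what it's worth, a common sequence for getting newly attached SCSI/SAS disks and their NTFS partitions visible on CentOS looks roughly like the following. It is only a sketch, and the /dev/sdb device name is an assumption, since the question shows the extra disks have not yet been assigned device nodes:

        # ask every SCSI host adapter to rescan its bus for new devices
        for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
        dmesg | tail -n 30                        # watch for newly detected disks, e.g. sdb, sdc ...

        # once a /dev/sdX node exists, read its partition table and mount the NTFS partition
        fdisk -l /dev/sdb                         # assumed device name
        yum install ntfs-3g                       # ntfs-3g comes from the EPEL repository on CentOS
        mkdir -p /mnt/sasdata
        mount -t ntfs-3g -o ro /dev/sdb1 /mnt/sasdata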

    Read the article

  • Building KPIs to monitor your business - It's not really about the Technology

    When I have discussions with people about Business Intelligence, one of the questions that inevitably comes up is about building KPIs and how to accomplish that. At a technical level the concept of a KPI is very simple, almost too simple, in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass of the iceberg hidden beneath the surface upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex. The disparity in size is meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means.

    From a technical perspective the KPI consists primarily of the following items:

    Actual Value - This is the actual data point that is being measured. An example would be something like the amount of sales.

    Target Value - This is the target goal for the KPI. This is a number that the Actual Value can be measured against. An example would be $10,000 in monthly sales.

    Target Indicator Range - This is the definition of the ranges that determine what type of indicator the user will see when comparing the Actual Value to the Target Value. Most often this is shown as a stoplight, but it can be any indicator that shows a status to the user in a quick fashion. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = within 5% of target in either direction; Green Light = more than 5% above the Target Value. (A short scripted sketch of this stoplight logic appears at the end of this article.)

    Status\Trend Indicator - This is an optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators are used to show some type of progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of the ranges for your status indicator, but also what value the number needs to be compared against.

    So now that we have an idea of what data points a KPI consists of from a technical perspective, let's talk a bit about tools. As you can see, technically there is not a whole lot to them, and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different types of tools in the Microsoft BI stack that you can use to expose your KPIs to the business. These include Performance Point, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology, and the right technology depends a lot on your goals and how you want to deliver the information to the users. Additionally, there are other non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of a KPI is in the business definition of the values and benchmarks for that KPI.

    The discussion about KPIs is very dependent on the history of an organization and how much it has been exposed to the attributes of a KPI. Oftentimes, when discussing KPIs with a business contact who has not been exposed to them, the discussion tends to also be a session educating the business user about what a KPI is and what goes into its definition. The majority of the time the business user has an idea of what their actual values are, and they have been tracking those numbers for some time, generally in Excel and all manually.
    So they will know the amount of sales last month along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value answers the "What" and "How much" questions. When you are talking about the Target Value you are asking the question "Is this number good or bad?" Typically, the user will know whether or not the value is good or bad, but most of the time they are not able to quantify what is good or bad. Their response is usually something like "I just know." Because they have been watching the sales quantity for years now, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break the business people of this habit.

    One of the fears, generally, is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But, on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month's sales and the target amount for this month would have any idea whether $10,000 in sales is good or not.

    Here is where a key point about KPIs needs to be communicated to both the business user and any user who might be viewing the results of that KPI. The KPI is just one tool that is used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question "Why." Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potentially negative or neutral KPI.

    So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the "Why" question. The main question here to ask is: "Okay, you see the indicator and you need to discover why the number is what it is - where do you go?" The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal: a Marketing Report showing a decrease in the promotional discounts for the month, a Pricing Report showing the reduction of prices of older models, an Inventory Report showing the discontinuation of a particular product line, or a memo showing the ending of a large affiliate partnership. The answers to the question "Why" are never as simple as a single indicator value.

    Being able to quickly get to this information is all about designing how a user accesses the KPIs and then also how easily they can get to the additional information they need. This is where a dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs, but also has links to some of the common reports that they run regarding sales data. The user's boss may have the same KPIs on their dashboard, but instead of links to individual reports they are going to have a link to a status report that was created by the user, which pulls together all the data about the KPI in a summary format the user's boss can review.
    So, some of the key things to think about when building or evaluating KPIs for your organization:

    - Technology should not be the driving factor.
    - KPIs are of little value without some indicator of whether a value is good, bad or neutral.
    - KPIs only give an answer to the "Is this number good\bad?" question.
    - Make sure the ability to drill into the "Why" of a KPI is close at hand and relevant to the user who is viewing the KPI.

    The KPI, when defined properly, is a key business tool to help monitor business performance across the enterprise in an objective and consistent manner. While the process of defining the business aspects of a KPI can sometimes feel arduous, the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of the key metrics for an organization and how those metrics are measured, and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization regardless of the technology or the implementation.
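    As promised above, here is a small scripted sketch of the stoplight logic described in the Target Indicator Range section. It is only an illustration of the arithmetic (the +/- 5% thresholds and the $10,000 target come from the article; everything else is made up), not a recommendation of any particular tool:

        # classify a KPI actual value against its target using the +/- 5% ranges above
        kpi_indicator() {
            local actual=$1 target=$2
            # percentage variance of the actual value from the target
            local pct=$(awk -v a="$actual" -v t="$target" 'BEGIN { printf "%.1f", (a - t) / t * 100 }')
            if   awk -v p="$pct" 'BEGIN { exit !(p < -5) }'; then echo "RED    (${pct}% vs target)"
            elif awk -v p="$pct" 'BEGIN { exit !(p >  5) }'; then echo "GREEN  (${pct}% vs target)"
            else                                                   echo "YELLOW (${pct}% vs target)"
            fi
        }

        kpi_indicator 9200  10000   # -> RED    (-8.0% vs target)
        kpi_indicator 10300 10000   # -> YELLOW (3.0% vs target)
        kpi_indicator 11000 10000   # -> GREEN  (10.0% vs target)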

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones. The problem is, it seems like no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations, or someone who could have been an excellent team member will get passed over for missing a piece of trivia or a tricky brain teaser.

    In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc.) there often exists a much tighter bond between HR and the hiring development staff, because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth. It seems that if you ask two different people what makes a good developer, often you will get three different opinions.

    With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process. Their view can often be that development is simply a skill that one learns, and that once it is acquired, that developer can produce widgets as good as the next, like workers on an assembly-line floor. On the other side, you have many developers who feel that software development is an art unto itself, and that the ability to create the purest design, or know the most obscure of keywords, or write the shortest possible obfuscated piece of code is what makes a good coder. So is it a skill? An art? Or something entirely in between?

    Saying that software is merely a skill and one just needs to learn the syntax and tools would be akin to saying anyone who knows English and can use Word can write a 300-page book that is accurate, meaningful, and stays true to the point. This just isn't so. It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document. I've interviewed candidates who could answer obscure syntax and keyword questions and, once they were hired, could not code effectively at all. So development must be more than a skill.

    But on the other end, we have art. Is development an art? Is our end result to produce art? I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code must perform some stated function with accuracy and efficiency and maintainability. None of these three things have anything to do with art, per se. Art is beauty for its own sake and is a wonderful thing. But if you apply that same thought to development it just doesn't hold. I've had developers tell me that all that matters is the end result and how you code it is entirely part of the art, and I couldn't disagree more. Yes, the end result, the accuracy, is the prime criterion to be met. But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon.
    Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution. So development must be something less than art.

    In the end, I think I feel like development is a matter of craftsmanship. We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient. There is skill involved, and there is an art, but really it boils down to being able to craft code. Crafting code is far more than writing code. Anyone can write code if they know the syntax, but so few people can actually craft code that solves a purpose and craft it well. So this is what I want to find: I want to find code craftsmen! But how?

    I used to ask coding-trivia questions a long time ago, and many people still fall back on this. The thought is that if you ask the candidate some piece of coding trivia and they know the answer, it must follow that they can craft good code. For example:

        What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const instance of that class/struct? (answer: mutable)

    So what do we prove if a candidate can answer this? Only that they know what mutable means. One would hope that this would imply that they'd know how to use it, and more importantly when and if it should ever be used! But it rarely does! The problem with trivia questions is that you will either:

    - Approve a really good developer who knows what some obscure keyword is (good)
    - Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad)
    - Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad)

    Many HR departments love these kinds of tests because they are short and easy to defend if a legal issue arises over hiring decisions. After all, it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test. But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds. There are times I've hired candidates who knew every trivia question I could throw at them and couldn't craft. And then there are times I've interviewed candidates who failed all my trivia but who I took a chance on, and they were my best finds ever.

    So if not trivia, then what? Brain teasers? The thought is, these types of questions measure the thinking power of a candidate. The problem is, once again, you will either:

    - Approve a good candidate who has never heard the problem and can solve it (good)
    - Reject a good candidate who just happens not to see the "catch" because they're nervous or it may be really obscure (bad)
    - Approve a candidate who has studied enough interview brain teasers (once again, you can google them) to recognize the "catch" or knows the answer already (bad)

    Once again, you're eliminating good candidates and possibly accepting bad candidates. In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not their ability to craft code.

    So how do we measure someone's ability to craft code? Here's a novel idea: have them code! Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil, and have them construct a piece of code. It just makes sense that if we're going to hire someone to code, we should actually watch them code.
    When they're done, we can judge them on several criteria:

    - Correctness - does the candidate's solution accurately solve the problem proposed?
    - Accuracy - is the candidate's solution reasonably syntactically correct?
    - Efficiency - did the candidate write or use the more efficient data structures or algorithms for the job?
    - Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability?
    - Persona - are they eager and willing or aloof and egotistical? Will they work well within your team?

    It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.

    Read the article

  • SQL Server MCM is too easy, is it?

    - by simonsabin
    We all know that Brent Ozar did the MCM training/certification over the past few weeks. He wrote an interesting article on Friday about the bad bits ( http://www.brentozar.com/archive/2010/04/sql-mcm-now-bad-stuff/ ) of the training, and it led me to think about the certification process again (I often think about it, and it often appears in response to something from Brent: http://sqlblogcasts.com/blogs/simons/archive/2010/02/12/Whats-missing-in-the-SQL-Certification-process-.aspx ). This time what...(read more)

    Read the article

  • How to mount ext4 partition?

    - by Flint
    How do I mount an ext4 partition as my user account so that I don't require root access to read/write on it? I used -o uid=flint,gid=flint on the mount command, but I keep getting:

        mount: wrong fs type, bad option, bad superblock on /dev/sda7,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Another thing: I want to avoid using udisks for now, as it doesn't let me mount to my specified mount point name.
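    For context, ext4 does not support uid=/gid= mount options (ownership is stored in the filesystem itself), which is why mount rejects them. A minimal sketch of one common approach follows; the mount point name is an assumption, and the user/group are taken from the question:

        # mount the partition normally (as root), then hand ownership to the user once
        sudo mkdir -p /media/data
        sudo mount -t ext4 /dev/sda7 /media/data
        sudo chown -R flint:flint /media/data      # ownership persists on the filesystem itself

        # optional /etc/fstab line so the user can mount/unmount it without root:
        # /dev/sda7  /media/data  ext4  defaults,user,noatime  0  2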

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I've taken a good hard look at JavaScript. I've used it before, but mostly in the capacity of web design. Using jQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I'm finding as I use it more is that I may have been wrong in my assumptions about it. Let me explain.

    I enjoyed doing cool stuff with jQuery, but my limited experience with JavaScript as a language, coupled with the bad things that I heard about it, led me to not have any real interest in it. However, JavaScript is ubiquitous on the web, and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.

    Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following:

    - The global object
    - null vs. undefined
    - truthy and falsy
    - limited (nearly nonexistent) scoping
    - '==' and '===' (I just don't get the reason for coercion)

    However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I'm not going to go into a huge dissertation on what I like about it, but some things include:

    - Object literal notation
    - dynamic typing
    - functional style (JavaScript: The Good Parts describes it as LISP in C clothing)
    - JSON (better than XML)

    There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.

    I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem's solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another 'arrow' to the 'quiver', so to speak. I do still intend to learn CoffeeScript to see what the hubbub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.

    Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • The cost of Programmer Team Clustering

    - by MarkPearl
    I was recently involved in a conversation about the productivity of programmers and the seemingly wide range in abilities that different programmers have in this industry. Some of the comments made were reiterated a few days later when I came across a chapter in Code Complete (v2) where it says "In programming specifically, many studies have shown order-of-magnitude differences in the quality of the programs written, the sizes of the programs written, and the productivity of programmers". In line with this is another comment presented by Code Complete when discussing teams - "Good programmers tend to cluster, as do bad programmers". This is something I can personally relate to. I have come across some really good and bad programmers, and 99% of the time it turns out the team they work in is the same - really good or really bad. When I have found a mismatch, it hasn't stayed that way for long - the person has moved on, or the team has ejected the individual.

    Keeping this in mind, I would like to comment on the risks an organization faces when forcing teams to remain together regardless of the mix. When you have the situation where someone is not willing to be part of the team but still wants to get a pay check at the end of each month, it presents some interesting challenges and hard decisions to make. First of all, when this occurs you need to give them an opportunity to change - for someone to change, they need to know what the problem is and what is expected. It is unreasonable to expect someone to change when you have not indicated what they need to change and the consequences of not changing. If, after a reasonable time of being aware of the problem, the individual has made no effort to improve, you need to do two things:

    - Follow through with the consequences of not changing.
    - Consider the impact that this behaviour will have on the rest of the team.

    What is the cost of not following through with the consequences? If there is no follow-through, it is often an indication to the individual that they can continue their behaviour. Why should they change if you don't care enough to keep your end of the agreement? In many ways I think it is very similar to the "Broken Windows" principle - if you allow the windows to break and don't fix them, more will get broken.

    What is the cost of keeping them on? When keeping a disruptive influence in a team, you risk losing the good in the team. As Code Complete says, good and bad programmers tend to cluster - they have a tendency to keep this balance. If you are not going to help keep the balance, they will. The cost of not removing a disruptive influence is that the good in the team will eventually help you maintain the clustering themselves - by leaving.

    Read the article

  • WebCenter Customer Spotlight: Sberbank of Russia

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Sberbank of Russia is the largest credit institution in Russia and the Commonwealth of Independent States (CIS), accounting for 27% of Russian banking assets and 26% of Russian banking capital. Sberbank of Russia needed to increase business efficiency and employee productivity due to the growth in its corporate clientele from 1.2 million to an estimated 1.6 million. Sberbank of Russia deployed Oracle's Siebel Customer Relationship Management (CRM) applications to create a single client view, optimize client communication, improve efficiency, and automate distressed asset processing. Based on Oracle WebCenter Content, they implemented an enterprise content management system for documents and unstructured content storage and search, which became an indispensable service across the organization and in the board room. Sberbank of Russia consolidated borrower information across the entire organization into a single repository to obtain, for the first time, a single view of the bank's borrowers. With the implemented solution they reduced the amount of bad debt significantly.

    Company Overview
    Sberbank of Russia is the largest credit institution in Russia and the Commonwealth of Independent States (CIS), accounting for 27% of Russian banking assets and 26% of Russian banking capital. In 2010, it ranked 43rd in the world for Tier 1 capital.

    Business Challenges
    Sberbank of Russia needed to increase business efficiency and employee productivity due to the growth in its corporate clientele from 1.2 million to an estimated 1.6 million. It also wanted to automate distressed asset management to reduce the number of corporate clients' bad debts. As part of their business strategy they wanted to drive high-quality, competitive customer services by simplifying client communication processes and enabling personnel to quickly access client information.

    Solution Deployed
    Sberbank of Russia deployed Oracle's Siebel Customer Relationship Management (CRM) applications to create a single client view, optimize client communication, improve efficiency, and automate distressed asset processing. Based on Oracle WebCenter Content, they implemented an enterprise content management system for documents and unstructured content storage and search, which became an indispensable service across the organization and in the board room.

    Business Results
    Sberbank of Russia consolidated borrower information across the entire organization into a single repository to obtain, for the first time, a single view of the bank's borrowers. They monitored 103,000 client transactions and 32,000 bank cards with credit collection issues (100% of Sberbank's bad borrowers), reducing the amount of bad debt significantly.

    "Innovation and client service are the foundation of our business strategy. Oracle's Siebel CRM applications helped advance our objectives by enabling us to deliver faster, more personalized service while managing and tracking distressed assets." - A.B. Sokolov, Head of Center of Business Administration and Customer Relationship Management, Sberbank of Russia

    Additional Information
    - Sberbank of Russia Customer Snapshot
    - Oracle WebCenter Content
    - Siebel Customer Relationship Management 8.1
    - Oracle Business Intelligence, Enterprise Edition 11g

    Read the article

  • Good SEO - Why is Good SEO Better Than Its Evil 'Black Hat' Brother?

    With the rise of today's technology, a number of bad and dodgy practices have come into play that can seriously put your website at risk of being banned from the search engines - no questions asked. The constant shift of techniques, and the 'state of flux' the search engines remain in, makes it difficult to differentiate between good and bad SEO.

    Read the article

  • Circular class dependency

    - by shad0w
    Is it bad design to have 2 classes which need each other? I'm writing a small game in which I have a GameEngine class which has got a few GameState objects. To access several rendering methods, these GameState objects also need to know the GameEngine class - so it's a circular dependency. Would you call this bad design? I am just asking, because I am not quite sure and at this time I am still able to refactor these things.

    Read the article

  • Hardware problem

    - by Ajay0990
    Guys, I need help recovering my external hard disk. I'm using a Seagate FreeAgent Go 320 GB HDD. Recently I tried to format it using the command line in Windows 7, but I accidentally removed the HDD before the format was complete, and now I cannot open it. I tried to recover the data using as many programs as I could, but with no luck; it reports up to 25,000 bad sectors. Can I still recover my HDD? Is there any way to recover an HDD with that many bad sectors using Linux?
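    One commonly suggested route on Linux for a disk in this state is to image it with GNU ddrescue and then run recovery tools against the image rather than the failing drive. A rough sketch, assuming the external disk shows up as /dev/sdb (the device name and paths are assumptions; double-check with dmesg before running anything):

        sudo apt-get install gddrescue           # Debian/Ubuntu package name; the binary is called ddrescue
        # first pass: copy everything readable, skip bad areas, keep a log file so the run can resume
        sudo ddrescue -n /dev/sdb /mnt/backup/freeagent.img /mnt/backup/freeagent.log
        # second pass: retry the bad sectors a few times with direct disc access
        sudo ddrescue -d -r3 /dev/sdb /mnt/backup/freeagent.img /mnt/backup/freeagent.log
        # then point recovery tools (e.g. testdisk/photorec) at the image, never at the dying disk
        sudo testdisk /mnt/backup/freeagent.img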

    Read the article

  • How do you install a USB CD-ROM drive?

    - by Matt Allen
    Hello, I recently purchased a USB CD-ROM drive, but I don't know how to get it to work with my computer, which runs Ubuntu 10.04. http://www.amazon.com/gp/product/B00303H908/ref=oss_product When I issue the lsusb command, it shows up as:

        Bus 002 Device 016: ID 05e3:0701 Genesys Logic, Inc. USB 2.0 IDE Adapter

    The computer doesn't recognize it automatically. How can I get this drive to show up as an actual drive on my computer? If this particular drive can't handle Linux, can you recommend one which can and provide a link to it so I can purchase it? Thanks!

    Update: I was asked by Scaine to run a command and report back with the output:

        joe@joe-laptop:~$ tail -f /var/log/kern.log
        Dec 29 12:51:35 joe-laptop kernel: [103190.551437] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track
        Dec 29 12:51:35 joe-laptop kernel: [103190.551446] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        Dec 29 12:51:35 joe-laptop kernel: [103190.551463] end_request: I/O error, dev sr1, sector 0
        Dec 29 12:51:35 joe-laptop kernel: [103190.877542] sr 7:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Dec 29 12:51:35 joe-laptop kernel: [103190.877551] sr 7:0:0:0: [sr1] Sense Key : Illegal Request [current]
        Dec 29 12:51:35 joe-laptop kernel: [103190.877559] Info fld=0x0, ILI
        Dec 29 12:51:35 joe-laptop kernel: [103190.877562] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track
        Dec 29 12:51:35 joe-laptop kernel: [103190.877572] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        Dec 29 12:51:35 joe-laptop kernel: [103190.877588] end_request: I/O error, dev sr1, sector 0
        Dec 29 13:08:46 joe-laptop kernel: [104221.558911] usb 2-2.2: USB disconnect, address 16

    Then when I plugged the drive back into the computer, I got:

        Dec 29 13:10:29 joe-laptop kernel: [104324.668320] usb 2-2.2: new high speed USB device using ehci_hcd and address 17
        Dec 29 13:10:29 joe-laptop kernel: [104324.761702] usb 2-2.2: configuration #1 chosen from 1 choice
        Dec 29 13:10:29 joe-laptop kernel: [104324.762700] scsi8 : SCSI emulation for USB Mass Storage devices
        Dec 29 13:10:29 joe-laptop kernel: [104324.762935] usb-storage: device found at 17
        Dec 29 13:10:29 joe-laptop kernel: [104324.762938] usb-storage: waiting for device to settle before scanning
        Dec 29 13:10:34 joe-laptop kernel: [104329.760521] usb-storage: device scan complete
        Dec 29 13:10:34 joe-laptop kernel: [104329.761344] scsi 8:0:0:0: CD-ROM TEAC CD-224E 1.7A PQ: 0 ANSI: 0 CCS
        Dec 29 13:10:34 joe-laptop kernel: [104329.767425] sr1: scsi3-mmc drive: 24x/24x cd/rw xa/form2 cdda tray
        Dec 29 13:10:34 joe-laptop kernel: [104329.767612] sr 8:0:0:0: Attached scsi CD-ROM sr1
        Dec 29 13:10:34 joe-laptop kernel: [104329.767720] sr 8:0:0:0: Attached scsi generic sg2 type 5
        Dec 29 13:10:34 joe-laptop kernel: [104330.141060] sr 8:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Dec 29 13:10:34 joe-laptop kernel: [104330.141069] sr 8:0:0:0: [sr1] Sense Key : Illegal Request [current]
        Dec 29 13:10:34 joe-laptop kernel: [104330.141077] Info fld=0x0, ILI
        Dec 29 13:10:34 joe-laptop kernel: [104330.141081] sr 8:0:0:0: [sr1] Add. Sense: Illegal mode for this track
        Dec 29 13:10:34 joe-laptop kernel: [104330.141090] sr 8:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        Dec 29 13:10:34 joe-laptop kernel: [104330.141106] end_request: I/O error, dev sr1, sector 0
        Dec 29 13:10:34 joe-laptop kernel: [104330.141113] __ratelimit: 18 callbacks suppressed

    There was more output than this (the number of lines started growing after the drive was plugged back in, and kept growing), but these are the first few lines.
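    For what it's worth, the log above shows the kernel does attach the drive (as /dev/sr1), and "Illegal mode for this track" is often the kind of error a drive returns when it cannot read the disc that is in it (blank, audio-only, or damaged media) rather than a driver problem. A few quick, hedged checks along those lines:

        ls -l /dev/sr* /dev/cdrom*                   # confirm which device node the kernel created
        sudo blkid /dev/sr1                          # does the inserted disc have a readable filesystem?
        sudo mount -t iso9660 -o ro /dev/sr1 /mnt    # try a known-good pressed data CD
        dmesg | tail -n 20                           # compare the errors with a different disc inserted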

    Read the article

  • Taking stock of the 21st Congress of the Oracle User Community

    - by Fabian Gradolph
    The 21st edition of the CUORE Congress (Comunidad de Usuarios de Oracle, the Oracle User Community) closed last Wednesday after two intense days of conferences, workshops, meetings and round tables. The more than 600 attendees are a good indication of the great interest that Oracle's technology proposals generate among our customers. Big Data and the utilities sector were two of the main protagonists of the Congress. The event was opened by Félix del Barrio (second from the left in the photo), general manager of Oracle in Spain.

    A good part of the event, Tuesday morning, was dedicated to Big Data, with Andrew Sutherland, Senior Vice President of Technology for Oracle in EMEA, giving the main presentation, followed by specific sessions on the technologies needed in the different phases of Big Data projects (acquiring the data, organizing it, analyzing it and, finally, making the corresponding business decisions). We are not going to dwell on explaining what Big Data is, a topic we have already covered on this blog (here and here), but it is worth highlighting a point that Andrew Sutherland put on the table in a meeting with journalists: Big Data projects make full sense when they help us change business processes and models in a way that increases the effectiveness of the organization. If our organization is based on rigid, immutable processes (which essentially has to do with the type of applications that are implemented), the benefit we can draw from Big Data will be limited. In other words, Big Data is a driver of change in organizations.

    The challenges facing a sector such as energy occupied the second day of the Congress. Industry trends such as Smart Grids, Smart Metering, the entry of new players and distributors into the market, the fragmentation of operators and frozen investments make up the landscape taking shape for companies in the utilities sector.

    In addition to the main events (Big Data and Oracle Utilities Day), the two days of the Congress also allowed those Oracle partners who wished to do so to obtain free certification for their professionals in several examination sessions. Additionally, parallel sessions were held on technologies and strategic visions, product demonstrations and customer success stories. In summary, the outcome of the 21st CUORE Congress is very positive for Oracle, for our customers and for our partners. We hope to see you all next year.

    Read the article

  • BI&EPM in Focus November 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers

    - San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings
    - NilsonGroup chooses Oracle Exalytics In-Memory Machine as their solution to access critical data to keep its stores competitive with real-time Mobile BI: Video
    - Nykredit, in the Danish Financial Sector, describes their experiences from testing the Exalytics Business Intelligence Machine: Video
    - Sodexo chose Oracle Exalytics as their business analytics platform: Video
    - AstraZeneca (US, Canada, MedImmune) Improves Insight, Analytics, and Reporting, Enterprisewide with Unified Planning on a Single Platform
    - Experian Consolidates Reporting Systems for One, Global View of Financial Data and Improves Planning for Continued Growth
    - Munchkin Gives its Line of Children's Products Plenty of Room to Grow in an Upgraded Enterprise Application Environment
    - Top 20 EPM Customer Snapshots, in One Handy Booklet (link)
    - Customer and Partner Successes: Link to Complete Archive

    Enterprise Performance Management

    - Nov 15: Is Hope and Email the Core of your Reconciliation Process? (link)
    - Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    - Whitepaper: The New Competitive Advantage - Strategic CIO's Embrace the Cloud (link)
    - Press: Oracle Enterprise Performance Management Driving Significant Improvements in Budget Management and Reporting for Public Sector Organizations (link)
    - Enterprise Performance Management Video Feature Overviews, Now Available on YouTube (link)
    - NEW Solution Brief - Oracle Hyperion Planning on the Oracle Exalytics In-Memory Machine (link)
    - For Insurance sector: Datasheet for new release V2.0 - Oracle Quantitative Management and Reporting for Solvency II (link)
    - Whitepaper FSN 2012: Managing Risk and Uncertainty, an Executive's Guide to Integrated Business Planning (link)
    - NEW Datasheet for Oracle Planning and Budgeting Cloud Service (link)
    - Blog: Planning in the Cloud - For Real Business Intelligence
    - Press: Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business (link)
    - Press: New Release (11.1.1.6.2BP1) of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere (link)
    - Mark Hurd Interviewed on USA Today about Big Data & Analytics (link)
    - Whitepaper: Mastering Big Data - CFO Strategies to Transform Insight into Opportunity (link)
    - Nov 15: Improve Asset Utilization. Achieve Greater Profitability: Oracle Enterprise Asset Management Analytics (link)
    - Replay: Oracle Enterprise Asset Mgmt Analytics and Oracle Manufacturing Analytics (link)
    - Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link)
    - Webcast Replay: Overview of Oracle Endeca Informational Discovery (link)
    - OBIEE 11g: Required and Recommended Patches and Patch Sets (link)

    Read the article

  • Subversion gives Error 500 until authenticating with a web browser

    - by Farseeker
    We used to use a CollabNet SVN/Apache combo on a Windows server with LDAP authentication, and whilst the performance wasn't brilliant it used to work perfectly. After switching to a fresh Ubuntu 10 install, and setting up an Apache/SVN/LDAP configuration, we have HTTPS access to our repositories, using Active Directory authentication via LDAP. We're now having a very peculiar issue. Whenever a new user accesses a repository, our SVN clients (we have a few depending on the tool, but for argument's sake, let's stick to TortoiseSVN) report "Error 500 - Unknown Response". To get around this, we have to log into the repo using a web browser and navigate 'backwards' until it works. E.g.:

        SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/  - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/MyRepo/MyModule/  - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/MyRepo/           - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/                  - Forbidden 403 (correct)
        Webbrowse to https://svn.example.local/SVN/MyRepo/           - OK 200 (correct)
        SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/  - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/MyRepo/MyModule/  - OK 200 (correct)
        SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/  - OK 200 (correct)

    It seems to require authentication up the tree, starting from the SVNParentPath up through to the module required. Has anyone seen anything like this before? Any ideas on where to start before I ditch it back to CollabNet's SVN server?
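    A few generic first steps for chasing an Apache/mod_dav_svn 500 like this, sketched with assumed paths (the repository location and log file path are guesses, not taken from the question):

        # the 500 usually logs a specific cause in Apache's error log
        sudo tail -n 50 /var/log/apache2/error.log

        # rule out repository-level problems and permissions on the SVNParentPath
        sudo svnadmin verify /srv/svn/MyRepo           # assumed repository path
        sudo chown -R www-data:www-data /srv/svn       # Apache's user must be able to read/write the repos

        # reproduce a single failing request and capture the full server response
        curl -vk --user myaccount https://svn.example.local/SVN/MyRepo/MyModule/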

    Read the article

  • LSI MegaRAID on Linux went back to Optimal after degradation, but with a strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span: 2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured for write-back mode.
        Because the battery is not currently usable, these VDs will actually run in write-through mode until
        the battery is fully charged or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d)

    So could a battery issue have caused the problem? Here is the battery information:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status                            : None
          Voltage                                    : OK
          Temperature                                : OK
          Learn Cycle Requested                      : No
          Learn Cycle Active                         : No
          Learn Cycle Status                         : OK
          Learn Cycle Timeout                        : No
          I2c Errors Detected                        : No
          Battery Pack Missing                       : No
          Battery Replacement required               : No
          Remaining Capacity Low                     : No
          Periodic Learn Required                    : No
          Transparent Learn                          : No
          No space to cache offload                  : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I have disabled the alarms, but I get them when they are enabled, and I don't know how to find the root of the problem.
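    If the standard LSI MegaCli utility is available (the binary name and path vary by install: MegaCli, MegaCli64, /opt/MegaRAID/MegaCli/...), a few commands that are commonly used to dig into exactly this write-back/BBU situation, offered as a sketch rather than a fix:

        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL         # detailed battery state, charge level and error flags
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -aALL   # remaining vs. design capacity of the pack
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL             # kick off a manual learn cycle
        MegaCli64 -LDGetProp -Cache -LAll -aAll          # confirm whether the VD has fallen back to WriteThrough
        MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL   # controller event log around the degradation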

    Read the article

  • Why does this loopback device creation malfunction?

    - by user50118
    The Stack Overflow people thought this was more appropriate here. I put it there as it is part of a program, but I can see their point of view, so here it is. At the bottom you can see it failing; in fact, I'll put it here at the start too, because it is the problem I need to solve:

        [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)

    I don't understand why the device is supposedly too small. I made this partition two days ago with normal fdisk; it was created and formatted with ext4, supplying no options other than the partition (/dev/sdb2) to format. The only explanation I can think of is that ext4 somehow has the size of the partition wrong, but that seems very unlikely. What is wrong with my math? The offset is correct, you can see that with the file command, and the size should be correct too, because End - Start comes to the same number of sectors minus 1, just like it should (a disk starting on sector 1 and ending on sector 2 would be 2 - 1 = 1 and have two sectors).

        # sfdisk -luS /dev/sdb

        Disk /dev/sdb: 9729 cylinders, 255 heads, 63 sectors/track
        Units = sectors of 512 bytes, counting from 0

           Device Boot     Start        End   #sectors  Id  System
        /dev/sdb2        78295040  156296384   78001345  83  Linux

        # losetup -r -f --show -o $((78295040 * 512)) --sizelimit $((78001345 * 512)) /dev/sdb
        /dev/loop0

        # file -s /dev/loop0
        /dev/loop0: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files)

        # mount -o ro -t ext4 /dev/loop0 /mnt
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        # dmesg | tail -n 1
        [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)
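    For what it's worth, here is the arithmetic the error message implies, as a sketch (the interpretation is mine, not from the question): with the sizelimit used above, the loop device ends up about 5,103 sectors (~2.5 MB) smaller than what the ext4 superblock says the filesystem occupies, which suggests the partition was slightly larger when mkfs.ext4 ran than the current partition entry reports.

        # size of the loop device in 4 KiB ext4 blocks, given a 78001345-sector sizelimit
        echo $(( 78001345 * 512 / 4096 ))    # = 9750168  -> matches "size of device (9750168 blocks)"

        # sectors the filesystem itself expects, from the superblock's block count
        echo $(( 9750806 * 4096 / 512 ))     # = 78006448 sectors

        # shortfall between the two
        echo $(( 78006448 - 78001345 ))      # = 5103 sectors (about 2.5 MB)

        # the superblock can be inspected directly to confirm the block count and block size
        dumpe2fs -h /dev/loop0 | egrep 'Block count|Block size'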

    Read the article

  • Is my webserver being abused for banking fraud?

    - by koffie
    For a few weeks I've been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

        www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The log format I use is:

        LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all these domains are domains of banks, and 3 out of the 4 domains are also in the list of the bank fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it's giving 403 to all requests, but any extra checks that I can do to ensure that my server is not being abused are welcome. I'm also curious how the "bad guys" expected my server to behave. I.e., are they just expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting that my server will actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be the IP of a victim or the IP of a bad guy? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in a second. If it's from a bad guy, I'm very curious what he is trying to do.
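    A couple of quick checks along those lines, sketched against the custom log format above (in that format, field 1 is the requested virtual host, field 2 the URL path and field 3 the client IP; the log file path is an assumption):

        # how many of these bank-hostname requests are arriving, and from how many distinct client IPs?
        grep -E 'bradesco\.com\.br|santander\.com\.br|banese\.com\.br|bb\.com\.br' /var/log/apache2/access.log \
            | awk '{print $3}' | sort | uniq -c | sort -rn | head

        # confirm the server really answers such Host headers with a 403 and not with any site content
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: www.bradesco.com.br' http://localhost/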

    Read the article

  • Network speed between a VM and another machine which is not residing on the same host, is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine that is not residing on the same host is 11 MB/s at most.

    Topology facts:
    - ESXi 5, version 5.0.0.504890
    - the VM has the latest VMware Tools installed
    - the VM uses the E1000 network driver
    - the physical box runs Windows Server 2008 R2 as the OS
    - CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
    - vCenter is another VM on the ESX host
    - both the VM and the physical box show a 1 Gbps link speed
    - Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far.

    Test 1: the VM runs FileZilla FTP Server (default settings, one user account made) and the physical box runs FileZilla FTP Client (default settings). The physical box uploads a big file to the FTP server; transfer speed, as observed by Windows Task Manager on both machines, is ~11 MB/s (bad). The physical box then downloads that file from the FTP server; still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: the physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS and the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed, as observed by Windows Task Manager on both machines: ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: the physical box runs the vSphere Client. I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed, as observed by Windows Task Manager on the physical box: ~26-36 MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same ESX server. Transfer speed, as observed by Windows Task Manager on the physical box: ~90-120 MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. Both ESX servers have two NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp. Transfer speed, as observed by Windows Task Manager on the physical box: ~11 MB/s (bad).

    I'm aware of these threads, but they did not help (or I must have missed something):
    - ESXi 4.1 slow file transfer
    - ESXi 5 network performance is slow
    - Debian Etch and ESXi slow network speeds
    - VMWare ESXi slow file copy to guest
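    As a further cross-check that takes both FileZilla and NTttcp out of the picture, a bare TCP stream between the VM and the physical box should show roughly the same ceiling. The sketch below is a stand-in for iperf-style tools and is not part of the original test series; the port, the transfer size, and the availability of Python on both machines are all assumptions:

    # Bare TCP throughput check: run "python tput.py server" on one machine and
    # "python tput.py client <server_ip>" on the other. Port and sizes are arbitrary.
    import socket
    import sys
    import time

    PORT = 5201
    CHUNK = 64 * 1024             # 64 KiB per send/recv call
    TOTAL = 512 * 1024 * 1024     # client sends 512 MiB per run

    def server():
        with socket.socket() as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", PORT))
            s.listen(1)
            conn, addr = s.accept()
            received = 0
            start = time.time()
            with conn:
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
            elapsed = time.time() - start
            print(f"received {received / 2**20:.0f} MiB from {addr[0]} in {elapsed:.1f} s "
                  f"({received / 2**20 / elapsed:.1f} MiB/s)")

    def client(host):
        payload = b"\0" * CHUNK
        sent = 0
        start = time.time()
        with socket.create_connection((host, PORT)) as s:
            while sent < TOTAL:
                s.sendall(payload)
                sent += len(payload)
        elapsed = time.time() - start
        print(f"sent {sent / 2**20:.0f} MiB in {elapsed:.1f} s ({sent / 2**20 / elapsed:.1f} MiB/s)")

    if __name__ == "__main__":
        if len(sys.argv) >= 3 and sys.argv[1] == "client":
            client(sys.argv[2])
        else:
            server()

    If the bare stream also tops out around 11 MB/s, the bottleneck sits below the application layer (NIC emulation, the switch port, or the host uplink). One variable the tests above do not isolate is the E1000 adapter itself; comparing against a VMXNET3 vNIC is a commonly suggested next step, though whether it would help here is not something the post establishes.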

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the handler mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example. I wrote the following CGI script:

    # hello.py
    print "Status: 400 Bad Request"
    print "Content-Type: text/html"
    print
    print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error in the body of the response. When the server response actually comes back to me, I get the following:

    Status: 400 Bad Request
    Date: Fri, 11 Feb 2011 17:58:30 GMT
    X-Powered-By: ASP.NET
    Connection: close
    Content-Length: 11
    Server: Microsoft-IIS/7.5
    Content-Type: text/html

    Bad Request

    I've seen on this forum and others that I can change or eliminate the X-Powered-By header element, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request", and then adds all that other junk. Is there some way to tell IIS to just send the response along without making any changes at all?
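    One way to narrow down what triggers the substitution is to vary the script slightly: emit an explicit Content-Length and a longer body and see whether IIS still swaps it out. This is a diagnostic sketch only; the header choice and the conclusion to draw from it are assumptions, not a confirmed fix:

    # hello_diag.py - variant of the script above for narrowing down the behaviour.
    # If IIS still replaces the body even with an explicit Content-Length, the
    # substitution is driven by the 4xx status line rather than by the body itself.
    # Written with sys.stdout.write so it runs unchanged on Python 2.5.
    import sys

    body = "Error Message: a longer, explicit description of what went wrong.\n"

    sys.stdout.write("Status: 400 Bad Request\r\n")
    sys.stdout.write("Content-Type: text/html\r\n")
    sys.stdout.write("Content-Length: %d\r\n" % len(body))
    sys.stdout.write("\r\n")
    sys.stdout.write(body)

    If the body is still replaced, the likely culprit is IIS's custom error handling for 4xx/5xx responses; the httpErrors section of web.config has an existingResponse setting whose PassThrough value is normally what tells IIS to leave an error body alone, although whether it applies to this particular CGI handler configuration would need to be verified.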

    Read the article

  • How to pass a Context when using acts_as_taggable_on for assigning tags

    - by kbjerring
    I am using acts_as_taggable_on for assigning 'fixed' categories as well as 'free' tags to a Company model. My question is how I can pass the correct tag context for the two different kinds of tags in the form view. For instance, I would like the user to click all the "sector" categories that apply to a company, and then be free to add additional tags. At this point, the only way I have gotten this to work is through the Company model (inspired by this question):

    # models/company.rb
    class Company
      ...
      acts_as_taggable_on :sectors, :tags

      has_many :sector_tags, :through => :taggings, :source => :tag
      has_many :taggings, :as => :taggable, :include => :tag,
               :class_name => "ActsAsTaggableOn::Tagging",
               :conditions => { :taggable_type => "Company", :context => "sectors" }
      ...
    end

    In the form (using the simple_form gem) I have:

    # views/companies/_form.html.haml
    = simple_form_for @company do |f|
      = f.input :name
      = f.association :sector_tags, :as => :check_boxes, :hint => "Please click all that apply"
      = f.input :tag_list

    But this obviously causes the two tag types ("sectors" and "tags") to be of the same "sectors" context, which is not what I want. Can anyone hint at how I can pass the relevant context ("sectors") in the form where the user assigns the sector tags? Or maybe I can pass it in the "has_many :sector_tags ..." line in the Company model? A related question is whether this is a good way to do it at all. Would I be better off just using a Category model for assigning sector tags through a join model? Thanks!

    Read the article

  • problem drawing gRaphaeljs pie chart

    - by Aswad
    Hi, I was trying to draw the Raphael.js pie chart. I used the same example as shown at http://g.raphaeljs.com/piechart2.html. It renders the text for me, but the pie chart goes missing. Can someone please help? Please find the code below.

    g·Raphaël Dynamic Pie Chart Demo

    window.onload = function () {
        var r = Raphael("holder");
        r.g.txtattr.font = "12px 'Fontin Sans', Fontin-Sans, sans-serif";
        r.g.text(320, 100, "Interactive Pie Chart Demo").attr({"font-size": 20});
        var pie = r.g.piechart(320, 240, 100, [55, 20, 13, 32, 5, 1, 2, 10],
            {legend: ["%%.%% – Enterprise Users", "IE Users"],
             legendpos: "west",
             href: ["http://raphaeljs.com", "http://g.raphaeljs.com"]});
        pie.hover(function () {
            this.sector.stop();
            this.sector.scale(1.1, 1.1, this.cx, this.cy);
            if (this.label) {
                this.label[0].stop();
                this.label[0].scale(1.5);
                this.label[1].attr({"font-weight": 800});
            }
        }, function () {
            this.sector.animate({scale: [1, 1, this.cx, this.cy]}, 500, "bounce");
            if (this.label) {
                this.label[0].animate({scale: 1}, 500, "bounce");
                this.label[1].attr({"font-weight": 400});
            }
        });
    };
    </script>
    </head>
    <body class="raphael" id="g.raphael.dmitry.baranovskiy.com">
      <div id="holder"></div>
      <p>
        Pie chart with legend, hyperlinks on two first sectors and hover effect.
      </p>
      <p>
        Demo of <a href="http://g.raphaeljs.com/">g·Raphaël</a> JavaScript library.
      </p>
    </body>

    Read the article
