Search Results

Search found 1862 results on 75 pages for 'matt luongo'.

Page 3 of 75

  • Non-Relational DBMS Design Resources

    - by Matt Luongo
    Hey guys, as a personal project I'm looking to build a rudimentary DBMS. I've read the relevant sections in Elmasri & Navathe (5th ed.), but I could use a more focused text. The rub is that I want to play with novel non-relational data models. While a lot of E&N was great (the indexing implementation details in particular), the more advanced DBMS implementation material targeted only the relational model. I could also use something a bit more practical and detail-oriented, with real-world recommendations. I'd like to defer staring at DBMS source for a while if I can. Any ideas?

  • Writing a DBMS in Python

    - by Matt Luongo
    Hey guys, I'm working on a basic DBMS as a pet project and planning to prototype in Python. I figure there's a reason there are only a few Python databases, and my gut agrees that my favorite language will be too slow to act as a database with honest performance, but I'm looking forward to using it to learn what I need quickly. Would someone please contradict me? Is Python as ill-suited right now for this sort of thing as I think?
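
    For what it's worth, a prototype doesn't have to perform to be instructive. Here is a minimal sketch of the kind of thing Python makes quick to try - an append-only key-value log with an in-memory index of byte offsets. It is purely illustrative: the class name, file format, and API are assumptions, not anything from the question.

        # A toy append-only key-value store: put() appends a JSON record to a
        # log file and records its byte offset in an in-memory index; get()
        # seeks straight to the latest record for a key. Illustrative only.
        import json
        import os

        class ToyStore:
            def __init__(self, path):
                self.path = path
                self.index = {}  # key -> byte offset of that key's latest record
                self._rebuild_index()

            def _rebuild_index(self):
                if not os.path.exists(self.path):
                    return
                with open(self.path, "rb") as f:
                    offset = 0
                    for line in f:
                        self.index[json.loads(line)["key"]] = offset
                        offset += len(line)

            def put(self, key, value):
                with open(self.path, "ab") as f:
                    f.seek(0, os.SEEK_END)
                    offset = f.tell()
                    f.write(json.dumps({"key": key, "value": value}).encode() + b"\n")
                self.index[key] = offset

            def get(self, key):
                with open(self.path, "rb") as f:
                    f.seek(self.index[key])
                    return json.loads(f.readline())["value"]

        store = ToyStore("toy.db")
        store.put("user:1", {"name": "matt"})
        print(store.get("user:1"))  # -> {'name': 'matt'}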

  • My chance to shape our development process/policy

    - by Matt Luongo
    Hey guys, I'm sorry if this is a duplicate, but the question's search terms are pretty generic. I work at a small(ish) development firm. I say small, but the company is actually a fair size; however, I'm only the second full-time developer, as most past work has been organized around contractors. I'm in a position to define internal project process and policy: obvious stuff like SCM and unit testing. Methodology is outside the scope of the document I'm putting together, but I'd really like to push us in a leaner (and maybe even Agile?) direction. I feel like I have plenty of good practice recommendations, but not enough solid motivation to make my document the spirit guide I'd like it to be. I've separated the document into "principles" and "recommendations". Recommendations have been easy to come up with: use SCM; strive for one-step, regularly scheduled builds; test first; document as you go... Listing the principles that are supposed to be informing these recommendations, though, has been rough. I've come up with "tools work for us; we should never work for tools" and a hazy clause aimed at our QA (which has been overly manual) that I'd like to read "tedium is the root of all evil". I don't want to miss the opportunity this document offers to give us a good in-house start and maybe even push us toward Agile. What principles am I missing?

  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was a minor issue with me fighting the ZFS driver and the kernel version, but that's all worked out now. One pool came online and mounted fine. The other didn't. The main difference between the two is that one was just a pool of disks (video/music storage), and the other was a raidz set (documents, etc.). I've already attempted exporting and re-importing the pool, to no avail; attempting to import gets me this:

        root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
           pool: storage
             id: 15855792916570596778
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
            see: http://zfsonlinux.org/msg/ZFS-8000-5E
         config:

                storage                                     UNAVAIL  insufficient replicas
                  raidz1-0                                  UNAVAIL  insufficient replicas
                    ata-SAMSUNG_HD103SJ_S246J90B134910      UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 UNAVAIL

    The symlinks for those in /dev/disk/by-id also exist:

        root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
        lrwxrwxrwx 1 root root  9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9

    Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the three 1 TB drives that were in the raidz array). I've run zdb -l on each drive, dumped the output to a file, and run a diff. The only differences among the three are the guid fields (which I assume is expected). All three labels on each drive are basically identical, and look like this:

        version: 5000
        name: 'storage'
        state: 0
        txg: 4
        pool_guid: 15855792916570596778
        hostname: 'kyou'
        top_guid: 1683909657511667860
        guid: 8815283814047599968
        vdev_children: 1
        vdev_tree:
            type: 'raidz'
            id: 0
            guid: 1683909657511667860
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 34
            ashift: 9
            asize: 3000569954304
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8815283814047599968
                path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18036424618735999728
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 10307555127976192266
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
                whole_disk: 1
                create_txg: 4
        features_for_read:

    Stupidly, I do not have a recent backup of this pool. However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double-check). So, in summary:

    - I upgraded Ubuntu and lost access to one of my two zpools.
    - The difference between the pools: the one that came up was JBOD; the other was raidz.
    - All drives in the unmountable zpool are marked UNAVAIL, with no notes about corrupted data.
    - Both pools were created with disks referenced from /dev/disk/by-id/.
    - The symlinks from /dev/disk/by-id to the various /dev/sd devices appear to be correct.
    - zdb can read the labels from the drives.
    - The pool has already been exported and re-imported once, and now fails to import again.

    Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about five people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life there is gets created this way: one person messes it up for the rest of us, and rules are then put in place to try and prevent it from happening again.

    Breaking the rules is in our nature. In this case it is for good cause: a necessity to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window.

    Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only people who know how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people for solving application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day.

    Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of those problems require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, to the system admin being forced to help, and most importantly to your customers, who are not happy about the situation. This process is terribly inefficient.

    Production database access is also important for solving application problems, but it presents a lot of risk if developers are given access. They could see data they shouldn't. They could accidentally write queries that update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data driven, it can be very difficult to track down application bugs without access to the production databases.

    Besides it being against the rules, why don't all developers have access? Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem and, in the process, forget what they changed and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages created!

    Matt Watson
    Founder, CEO
    Stackify
    Agile Support for Agile Developers

  • Windows Server 2008 issue

    - by Matt Fitz
    We have two domains, “pdc1” and “devkc”. Both are Windows 2000 Active Directory domains with a two-way trust relationship in place; it has been this way for years. All of our developer machines are joined to the “devkc” domain, but the users log into their accounts on the “pdc1” domain. This all works fine with Windows XP, 2000, and Server 2003. However, with Windows Server 2008 the users can only log into the “devkc” domain that the machine is joined to; they cannot log into the “pdc1” domain. The following error results: "The security database on this server does not have a computer account for this workstation trust relationship." Any ideas would be greatly appreciated. Thanks, Matt Fitz

  • Load-balanced ASP.NET websites and required memory usage

    - by Matt
    Each of my servers has 8 GB of RAM, and memory usage hovers around 7 GB. I have a load balancer available to me, but at the moment I'm worried that putting my sites through it will cause the platform to fall over. The load balancer would be configured with sticky round-robin: a new connection is assigned round-robin, but subsequent connections from the same source IP remain on the same server (until a limit is reached). That's all standard stuff. How do I know what memory usage my sites will need across the platform when I put them through the load balancer? Rather than knowing that a site is using 150 MB on a particular server, I could face a situation where that 150 MB is taken up on each of the servers. I know that with only 1 GB free I could have a serious problem on my hands. If I free up some memory, how can I work out what I need to keep free to prevent this from happening? Thanks, Matt
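
    The worst case is easy to bound with back-of-the-envelope arithmetic: under sticky round-robin, any site can eventually be warm on any server, so each server's headroom has to cover the sum of all per-site working sets. A rough sketch of that calculation in Python (all figures are hypothetical, not taken from the question):

        # Back-of-the-envelope check: can each server absorb every site's
        # working set in the worst case? All numbers here are hypothetical.
        server_ram_mb = 8 * 1024     # 8 GB per server
        os_and_base_mb = 1024        # reserve for the OS and base services
        site_working_set_mb = {      # per-site worst-case resident memory
            "site-a": 150,
            "site-b": 300,
            "site-c": 90,
        }

        # Sticky round-robin means any site can end up warm on any server,
        # so the worst case per server is the sum over all sites.
        worst_case_mb = sum(site_working_set_mb.values())
        headroom_mb = server_ram_mb - os_and_base_mb

        print(f"worst case per server: {worst_case_mb} MB")
        print(f"available headroom:    {headroom_mb} MB")
        print("OK" if worst_case_mb <= headroom_mb else "risk of paging")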

  • Determining trustees of directories on a Novell NetWare volume

    - by Matt Delves
    Currently there are a lot of directories (home directories of users who may no longer exist) on a NetWare volume. As the number is significant, I need an easy way of determining whether there are any trustees (existing users who have permissions to the directory) on the directories in question. So, two things I'm after: 1) Are there any applications that take a list of directories as input and output the same list with the trustees attached? 2) Is there an easy way to determine the trustees without looking at ConsoleOne? Thanks, Matt.

  • How to test TempDB performance?

    - by Matt Penner
    I'm getting conflicting advice on how best to configure our SQL storage with our current SAN. I would like to do some of my own performance testing with a few different configurations. I looked at using SQLIOSim, but it doesn't seem to simulate TempDB. Can anyone recommend a way to test data, log, and TempDB performance? What about using a SQL Profiler trace file from our production system? How would I use this to run against my test server? Thanks, Matt

  • Windows 7 boot problem on a Lenovo Thinkpad Z61m 9450HAG

    - by Matt Taylor
    Hello, I recently did a full upgrade to Windows 7 on my ThinkPad. Everything worked fine up until the second reboot (the first reboot, after some updates installed, worked OK). At second reboot the system just black-screens before the Windows logo appears; the disk/wireless/power/battery lights are all lit and the disk light is active (flickering). However, if I remove the battery and boot on mains power alone, it boots fine and quickly, and everything is OK. Any help on why this won't boot with the battery plugged in is greatly appreciated - I need to take this battery out on the road/trains etc. Cheers, Matt

  • Mac OS X Active Directory authentication and Linux Samba share problems

    - by Matt Delves
    As a precursor, the network setup includes a combination of Novell NetWare servers as well as Windows and Linux servers. I've successfully bound my Mac to the Windows domain and can log in without any problems. I've been able to mount any Windows-based share without needing to resupply login credentials. The problem I've found is that when I attempt to mount a share from a Linux server, it asks me to resupply the login credentials. Has anyone experienced this kind of problem? The Linux servers are a combination of SLES 10 and 11 and RHEL 4 and 5. Thanks, Matt

  • Is it possible to bind a Windows key combination to a currently open application?

    - by matt
    I use Launchy on every box that I have to interact with for more than a few hours a day, and it certainly makes me more efficient, but I want more. I would like a key combination that takes a window I use frequently and is always open, such as mRemote or FAR Manager, and brings it to the foreground. I have been alt-tabbing around forever, and it gets old when there are more than a few windows open. Anyone have any ideas on this? Thanks, Matt.
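
    One direction worth exploring, if a dedicated tool such as AutoHotkey is not an option, is a small script over the Win32 hotkey and window APIs. The sketch below (Python via ctypes; the hotkey choice and target title are assumptions for illustration, and FindWindowW matches an exact window title) registers a global hotkey that brings a named window to the foreground:

        # Sketch: a global hotkey (Ctrl+Alt+M - an arbitrary choice) that
        # brings a window with a given exact title forward via Win32 calls.
        import ctypes
        from ctypes import wintypes

        user32 = ctypes.windll.user32

        MOD_ALT, MOD_CONTROL = 0x0001, 0x0002
        WM_HOTKEY = 0x0312
        SW_RESTORE = 9
        VK_M = 0x4D
        TARGET_TITLE = "mRemote"  # hypothetical: must match the window title exactly

        if not user32.RegisterHotKey(None, 1, MOD_CONTROL | MOD_ALT, VK_M):
            raise RuntimeError("hotkey is already in use")

        msg = wintypes.MSG()
        while user32.GetMessageW(ctypes.byref(msg), None, 0, 0) != 0:
            if msg.message == WM_HOTKEY:
                hwnd = user32.FindWindowW(None, TARGET_TITLE)
                if hwnd:
                    user32.ShowWindow(hwnd, SW_RESTORE)  # un-minimize if needed
                    user32.SetForegroundWindow(hwnd)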

  • Real-time mirroring between two SQL Server databases

    - by Matt Thrower
    Hi, I'm a C# programmer, not a DBA, and I've had the (mis)fortune to be handed a database admin task, so please bear this in mind when answering. What I've been asked to do is create a real-time, two-way mirror between two databases with a 10-megabit connection between them, so that when either changes, it updates the other. This is not a standard data mirroring/failover task where one DB is the master and the other is a backup - both are live, and each needs to instantly reflect changes made to the other. In my head this sounds like a tall order, one which may even be impossible - after all, in a rapidly changing environment with lots of users, this is going to be massively resource intensive and create locks and queues of jobs all over the place. Is it possible? If so, can anyone give me some basic instructions and/or point me at some places to start my reading and research? Cheers, Matt
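
    To make the core difficulty concrete, here is a toy model of two-way sync - plain Python dicts standing in for tables, entirely illustrative and nothing to do with SQL Server's actual mechanisms. The moment the same row changes on both sides between syncs, one write has to lose, which is why every real scheme needs an explicit conflict rule. (For research leads: SQL Server's merge replication and peer-to-peer transactional replication are the built-in features usually discussed for this scenario.)

        # Toy model of bidirectional sync: two "databases" as dicts mapping
        # row id -> (value, timestamp). Illustrative only - real replication
        # uses change tracking, not full scans, and smarter conflict rules.
        import time

        db_a, db_b = {}, {}

        def write(db, row_id, value):
            db[row_id] = (value, time.monotonic())

        def sync(x, y):
            # Last-write-wins: for every row, the newer timestamp overwrites
            # the older copy on the other side. Concurrent edits silently lose.
            for row_id in set(x) | set(y):
                a = x.get(row_id, (None, -1.0))
                b = y.get(row_id, (None, -1.0))
                winner = a if a[1] >= b[1] else b
                x[row_id] = y[row_id] = winner

        write(db_a, "order-1", "shipped")    # edited on side A...
        write(db_b, "order-1", "cancelled")  # ...and concurrently on side B
        sync(db_a, db_b)
        print(db_a["order-1"])  # one of the two edits has been lost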

  • How do I delay email delivery using Entourage 2008 with Exchange, e.g. using the X.400 Deferred-Delivery header?

    - by Matt McClure
    I'd like to delay the delivery of email that I send so that I can time delivery when the recipient is unlikely to be reading email and I can reduce the likelihood of getting into a chat-like conversation. I'm using Entourage 2008 and Exchange hosted by Rackspace. I tried naively adding a Deferred-Delivery header after reading http://www.faqs.org/rfcs/rfc2156.html and www.itu.int/rec/T-REC-F.400-199906-I/en , but my mail was delivered immediately. Ideally the delay would occur on the MTA instead of my MUA so that delivery would still occur even if my laptop were disconnected from the network at the delivery time I specify. My best workaround at the moment is to habitually use Entourage's Send Later button when composing mail and then click Send/Receive at the end of the day. This is less than ideal because recipients are often reading mail at the end of my day, and I often get immediate replies. Matt
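
    For reference, "naively adding the header" amounts to something like the sketch below (a Python stand-in; addresses and server are placeholders). The catch matches what was observed: Deferred-Delivery comes from X.400/RFC 2156, and a typical SMTP MTA simply passes the header through and delivers immediately unless it specifically supports deferred submission:

        # Sketch: composing a message with a Deferred-Delivery header (RFC 2156).
        # Most SMTP MTAs ignore this header; it only has an effect if the MTA
        # explicitly supports deferred delivery. All addresses are placeholders.
        import smtplib
        from email.message import EmailMessage
        from email.utils import format_datetime
        from datetime import datetime, timedelta, timezone

        msg = EmailMessage()
        msg["From"] = "me@example.com"
        msg["To"] = "them@example.com"
        msg["Subject"] = "Not urgent - read tomorrow"
        # Ask (politely, and usually in vain) for delivery 12 hours from now.
        msg["Deferred-Delivery"] = format_datetime(
            datetime.now(timezone.utc) + timedelta(hours=12))
        msg.set_content("Body text goes here.")

        with smtplib.SMTP("smtp.example.com") as smtp:
            smtp.send_message(msg)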

  • Restricting SSRS subscriptions to shared schedules only

    - by Matt Frear
    Hi all, I'm reasonably new to SQL Server Reporting Services and Report Manager, and completely new to SSRS subscriptions. We're running SSRS 2008. Out of the box, it seems that a user with the Browser role can create a subscription to a report and schedule it to run at any time they choose. As an admin, I have set up a shared schedule called "Overnight reports" that runs every night at 1am. I would like it so that when a regular user creates a subscription, they can only use one of my shared schedules, so that their subscription will only run overnight. Is this possible? Thanks, Matt

  • CentOS 5.5 x86_64 VPS - A lot of inbound traffic when idle?

    - by Matt Clarke
    I have a CentOS VPS from UKWSD, and I'm getting inbound traffic that I cannot understand. The VPS was set up yesterday, and I installed vnstat this morning around 10am. Since then the server has been basically idle, doing nothing from 12pm onwards, yet vnstat shows inbound activity way over what it should be, and I'd say the outbound is pretty much over the top too. Here is vnstat (snapshot taken at 10:30pm GMT): http://i.imgur.com/XnORb.jpg Here are the iptables rules: http://pastebin.com/uGxX2Ucw The reason I'm concerned is: 1) I have no idea why this is happening, and I like to know what's going on :D 2) I've calculated (briefly) that this pointless traffic would use around 15-20 GB of bandwidth per month, and when you're on a 150 GB limit, that's quite an issue. I'm struggling to understand this, and I thought I'd get some advice before asking my ISP (and risk looking completely stupid). Regards, Matt

  • How to Mirror or Clone a Spanned Volume in Windows 2008

    - by Matt
    I have a spanned volume (3 x 6+ TB disks spanned into one 20+ TB volume) that I need to mirror or clone to a new 20+ TB (unspanned) volume. Once it is mirrored or cloned, I'm going to destroy the original volume and reuse the storage elsewhere. Windows 2008 will not allow me to mirror it because the original is a spanned volume. I cannot simply copy the data, because there are sparse files on the volume, so the OS thinks there is 150+ TB used on the disk when there is really only around 18 TB used physically. When I try to use the copy command, it won't run because it thinks the destination volume needs to be 150+ TB to hold it all. A conundrum, but I figure someone here has the answer. Thanks, Matt

  • Can you specify git-shell in .ssh/authorized_keys to restrict access to only git commands via ssh?

    - by Matt Connolly
    I'd like to be able to use an SSH key for authentication, but still restrict the commands that can be executed over the SSH tunnel. With Subversion, I've achieved this by using a .ssh/authorized_keys file like:

        command="/usr/local/bin/svnserve -t --tunnel-user matt -r /path/to/repository",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAABIetc...

    I've tried this with "/usr/bin/git-shell" in the command, but I just get the funky old "fatal: What do you think I am? A shell?" error message.
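
    A variant that is often suggested for this setup - offered here as an assumption to check against your git-shell documentation, not a confirmed fix - is to have the forced command pass the client's requested command through to git-shell via SSH_ORIGINAL_COMMAND, which sshd sets to whatever the client asked to run:

        command="/usr/bin/git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAABIetc...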

  • OpenVPN client on Amazon EC2

    - by Matt Culbreth
    I have an account with an OpenVPN service, and I'd like to get that running on my EC2 instance running Ubuntu 12.04. I have my config file in /etc/openvpn, and it connects fine when I run sudo openvpn --config matt.ovpn. However, I then lose connectivity to the EC2 machine, and I can't SSH back to it until I reboot. Previously I have done things like sudo ip rule add from IP_ADDRESS table 10 and then sudo ip route add default via GATEWAY_IP table 10, but that's not working on EC2. Any ideas? My private IP address right now is 10.209.29.XXX and my gateway is 10.209.29.1.

  • How to pass an enum to Html.RadioButtonFor to get a list of radio buttons in MVC 2 RC 2, C#

    - by Matt W
    Hi, I'm trying to render a radio button list in MVC 2 RC 2 (C#) using the following line:

        <%= Html.RadioButtonFor(model => Enum.GetNames(typeof(DataCarry.ProtocolEnum)), null) %>

    but it just gives me the following exception at runtime:

        Templates can be used only with field access, property access, single-dimension array index, or single-parameter custom indexer expressions.

    Is this possible, and if so, how, please? Thanks, Matt.

  • Fluent NHibernate View Mapping requires Id Column

    - by Matt
    Hi, I'm trying to use Fluent NHibernate to map a view, but FNH insists on having an Id property mapped. However, not all of my views have a unique identifying column. I can get around this with XML mappings, as I can just specify

        <id type="int"> <generator class="increment"/> </id>

    at the top of the mapping. Is there any way to duplicate this in FNH...? TIA, Matt

  • How to get .NET HttpContext for isolated code block

    - by Matt Thrower
    Hi, in .NET is it possible to get the HttpContext of the current page from within an external class? For example, in my page test1.aspx code-behind I've got:

        Dim blah As New FeedWriter()
        blah.Run()

    But inside FeedWriter.vb, can I get the HttpContext of test1.aspx? Or would I have to pass it in to Run()? (I'm unwilling to do the latter because FeedWriter implements an interface, which would need to be rewritten if it were to take arguments.) Cheers, Matt
