Search Results

Search found 16537 results on 662 pages for 'oracle accelerate for midsize companies'.


  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 - Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was what the Cone of Uncertainty looks like in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones – one for the waterfall world and one for the agile world. Cone of Uncertainty The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF). Cone of Uncertainty. Taken from the MSDN Library page Estimating. The closer we are to the end of the project, the more accurate our estimates are. When the project ends we know exactly how much time every task took. As we can see, the cone is wide at the point where we usually have to give our estimates – somewhere between Initial Project Concept and Requirements Complete. Don’t ask me why Initial Project Concept is the stage where some companies give their best estimates – they just do it every time and don’t learn a thing later. This cone is inevitable for software development, but agile methodologies that try to make the software world better are able to change its shape. Cone of Uncertainty in agile projects Agile methodologies usually try to avoid BDUF, waterfalls and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes – don’t forget our dear customers either. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on small and short iterations. What does this mean? We are estimating functionalities that are easier for us to understand and implement, so our estimates are more accurate. As we move from a few big iterations to many small iterations we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used. Cone of Uncertainty in agile projects. We have more cones to live with, but they are much smaller. I don’t have any numbers to put here because I didn’t find any, but this “chart” should still make the point: more, smaller iterations produce more but far smaller cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don’t often grow over our heads. One more note: both charts in this posting describe exactly the same phase of the same project – only the uncertainties are different.

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account, I don’t even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order: Better Review. It allows multiple recipients to review the file and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients. Always up to date. The attachment becomes a fork instead of an always up to date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to attach the file instead of sharing a link. Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser or my collection of links in my OneNote or from the recent/pinned links of the office app on my task bar, etc. Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you’d prefer not to have access to it, the location of the document can be protected with specific access control. Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions. Enable Discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document. Save on storage. So this doesn't apply to me given my opening statement, but if your company does have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy). Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Reasons to Use a VM For Development

    - by George Stocker
    Background: I work at a start-up company, where one team uses Virtual Machines to connect to a remote server to do their development, and another team (the team I'm on) uses local IIS/SQL Server 2005/Visual Studio installations to conduct work. Team VM is located about 1000 miles from Team Non-VM, and the servers the VMs run off of are located near Team VM (latency, for those that are wondering, is about 50ms). A person high up in the company is pushing for Team Non-VM to use virtual machines for programming, development, and testing. The latter point we agree on -- we want Virtual Machines to test configurations and various aspects of the web application in a 'clean' state. The Problem: What we don't agree on is having developers use RDP to connect to a remote desktop that contains Visual Studio, SQL Server, and IIS to do the same development we could do locally on our laptops. I've tried the VM set-up, and besides the color issue, there is a latency issue that is rather noticeable; not to mention that since we're a start-up, a good number of employees work from home on occasion with our work laptops, and this move would cut off the laptops. They'd be turned in. Reasons to Use Remote VMs for Development (Not Testing!): Here are the stated reasons that this person wants us to use VMs: They work for Team VM. They keep the source code "safe". If we want to work from home, we could just use our home PCs. Licenses (I don't know what the argument is, only that it's been used). Reasons not to use Remote VMs for Development: Here are the stated reasons why we don't want to use VMs: We like working from home. We get a lot done on our own time. We're not going to use our home PCs to do work-related stuff. The latency is noticeable. Support for the VMs (if they go down, or if we need a new VM) takes a while. We don't have administrative privileges on the VM, and are unable to change settings as needed. What I'm looking for from the community is this: What reasons would you give for not using VMs for development? Keep in mind these are remote VMs -- this isn't a VM running on a local desktop. It's using the laptop (or a desktop) as a thin client for a remote VM. Also, on the other side of the coin: Is there something we're missing that makes VMs more palatable for development? Edit: I think 'safe' is used in terms of corporate espionage, or more precisely, that if a laptop gets stolen, the person who stole it would have access to our source code. The former, as we've pointed out, is always going to be a possibility -- companies stop that with litigation; there isn't a technical solution (so far as I can see). The latter point (though I don't know its usefulness in a corporate scenario) is mitigated by TrueCrypt'ing the entire volume.

    Read the article

  • How quickly to leave contract-to-hire gig where you don't want to be hired? [closed]

    - by nono
    So you move to a big new city with tons of software development opportunity, having taken a six month contract-to-hire job. The company treats you really well and has a good team and work environment. However, the recruiter assured you when offering the gig that it would be a good position in which you can advance your learning from more senior developers (a primary concern of yours) but you're starting to realize that a job recruiter isn't going to understand that the team in question isn't very up on modern software practices (you start to sympathize with this guy and read his post over and over again: http://stackoverflow.com/questions/1586166/career-killer-nhibernate-oop-design-patterns-domain-driven-design-test-driv) and that much of the company's software is very old and very very poorly architected, and the company (like so many others) seems to be only concerned with continually extending the software without investing in any structural improvements. You're absolutely dismayed at how long it takes your team (including) to fulfill simple feature requests (maybe 500-1000% longer than with better designed software that you've worked on in the past), but no one else there seems to think anything of it. You find that the work and the company's business are intensely uninteresting to you, but due to the convoluted design of their various software systems, fulfilling the work will require as much mental engagement as any other development position. You feel a bit naive about not having asked the right questions during your interview process, and for not having anticipated that your team at your former podunk company might possibly be light-years ahead of any team in Big Shiny City, but you know you don't want to stay at this place, and (were it not for your personal, after-hours studying and personal programming efforts) fear that you might actually give a worse interview after completing your 6 months than you did when you started at the place. You read about how hard of a time local companies are having filling their positions with qualified software development candidates. You read all sorts of fabulous sounding job postings online and feel like you're really missing out. In spite of the comfortable environment you feel like you would willingly accept a somewhat more demanding or aggressive lifestyle to feel like you are learning and progressing and producing something meaningful. My questions are: how quickly do you leave and how do you go about giving a polite reason for departing? The contract is written to allow them to "can" you and to allow you to leave with 2 weeks notice. Do you ethically owe the 6 months? Upon taking the position, the company told you they were not interested in candidates who were intending to only stay for 6 months and then leave (you were not intending to bail after 6 months, at that time), so perhaps they might be fine if you split now, knowing that you don't want to stick around for the full time hire?

    Read the article

  • Can you work for the big (Google, Microsoft, Facebook etc.) without getting too much involved?

    - by Developer Art
    Having seen people talking about interviewing and working for the big companies, I keep wondering how much you are expected to actually get involved there. 1) That's because I keep seeing folks from Google and Microsoft and others writing in forums, blogging, tweeting, speaking at conferences and seemingly doing this on a 24/7/365 basis from their office, apartment, hotel and even plane. Are you really expected to commit that much if you come to work for them? Do they want you to think about your work while you're eating, sleeping, taking a shower, making love and so on? Can you in fact "switch off" at five and go home, forgetting everything? Perhaps you have a hobby, family life, kids, friends, personal projects anyone? Is it so that if you work for the big then you're expected not to have any life outside of the company? You can't develop your own projects, have your own clients and just have another life? 2) The other thing is the work contracts the big use. I've heard for instance that when you join Microsoft you need to provide a list of projects you're currently working on, and after that anything new you come up with during your employment automatically belongs to the company. Are all of the big doing this? Can you refuse to sign the contract until such a clause is removed, or with the big is it "take it or leave it" because the legal department won't accept any change? Can you make them write the contract in such a manner that they step away from anything you've developed in your private time? Of all the big I have only been at SAP, during my internship. Lately while browsing through some old papers I found my old contract, which stipulated they owned everything I developed or invented during my employment, and which I would never sign these days. On a side note I don't think I would return to SAP, since I remember most people there were clueless and gave the impression they were simply sitting out their years waiting for retirement. But anyway, what do the other big put in their contracts? How far do you get involved when you go working for the big? Or are you perhaps fully committed with your body and soul? P.S. I'm not planning to join any of them, I'm just curious.

    Read the article

  • How can we unify business goals and technical goals?

    - by BAM
    Some background I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding, and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work. One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline. The problem The problem we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is a huge pressure to deliver as fast as possible as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot. Developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises? Things we've done with little or no success: Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out before) Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay) Extend deadlines (if we charge the same, but take longer, we'll end up losing money) Things we've done with some success: Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere. In return, however, we all have creative input and more flexibility and freedom, a typical startup trade off) Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses) Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth. However, as a startup we can only afford to hire a few extra people.) Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:
    About us / About MyCompany / MyCompany
    - About / About us: description of the company, its mission, and its vision.
    - History: summary of milestones achieved by the company.
    - The team / Management / Board of directors: depending on the size of the company there may be one or more pages describing the people involved in the company, depending on their position.
    - Awards: list of awards received by the company, if any.
    - In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.
    Media
    - Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image for their computer.
    - Logos: the company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes.
    - Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.
    Misc
    - Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
    - Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
    - Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page to access a portal restricted to those who already are.
    - FAQ: list of common questions and answers to guide users and reduce the number of support requests.
    Follow us / Community
    - Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.
    Legal
    - Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
    - Privacy / Privacy Statement: explanations as to how the company deals with users' personal data and what users can do about it (request information to be deleted...).
    - Cookies: a page that is starting to appear on more and more websites due to new regulation (notably in the EU) imposing more transparency and control for users about cookies (e.g. the BBC cookie page).
    Any input is welcome. PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • I'm graduating with a Computer Science degree but I don't feel like I know how to program.

    - by wp123
    I'm graduating with a Computer Science degree but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. During one summer I did have the opportunity to work as a iPhone developer, but I felt like I was mostly gluing together libraries that other people had written with little understanding of the mechanics happening beneath the hood. I'm trying to improve my knowledge by studying algorithms, but it is a long and painful process. I find algorithms difficult and at the rate I am learning a decade will have passed before I will master the material in the book. Given my current situation, I've spent a month looking for work but my skills (C, Python, Objective-C) are relatively shallow and are not so desirable in the local market, where C#, Java, and web development are much higher in demand. That is not to say that C and Python opportunities do not exist but they tend to demand 3+ years of experience I do not have. My GPA is OK (3.0) but it's not high enough to apply to the large companies like IBM or return for graduate studies. Basically I'm graduating with a Computer Science degree but I don't feel like I've learned how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from those more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in similar situation tried to do? I've worked hard but don't have the confidence to go out on my own and write my own app. (That is, become an indie developer in the iPhone app market.) If nothing turns up I will need to consider upgrading and learning more popular skills or try something marginally related like IT, but given all the effort I've put in that feels like copping out. EDIT: Thank you for all the advice. I think I was premature because of unrealistic expectations but the comments have given me a dose of reality. I will persevere and continue to code. I have a project in mind, although well beyond my current capabilities it will challenge me to hone my craft and prove my worth to myself (and potential employers). Had I known there was a career overflow I would have posted there instead. Thanks again!

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the way the product succeeds, and I see when it fails. Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don’t. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I’ve found (surprise surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it, and those places that do the testing and follow best practices really do save stress, time and trouble through that effort. We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend doing the testing, you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • Which one to select for my future career; Java, C#, Azure or Apex?

    - by user636195
    Hi folks, I am going to study for a Masters in Computer Science in the U.S.A., starting in a week. I have been doing my B.Sc. for the past three years, and after my freshman year I started working on projects (in C# and, very rarely, in Java) for the past two years (i.e. while I was a second and third year student). Now I am in a college where all of the programming courses are going to be taught in Java only (using Eclipse), and I am going to stay at this college for 8 months on campus and then be fully employed for two years at other companies as a CPT. I really love working with Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and be a cloud-based business owner in the near future. Since the college is going to teach us and have us do every project in Java, I am confused about which programming language to focus on to help me in my career, and of course I want to pick the one I enjoy working with. I have also heard a lot about Azure (Microsoft's) and Apex (Salesforce.com's cloud computing programming language). Would you please give me your advice and recommendations based on my situation? Should I study only Java, or should I study C# or Azure alongside Java on my own? The reason I ask is that, since I have no clue how Azure works or how long it will take me to learn it, I am really confused about which one to select (Java vs C#, and Azure vs Apex, or whether there is another popular and widely used cloud computing language). Do you think I can get a job in cloud computing if I study Azure or Apex on my own, without experience? There is also one short-term issue I need to consider: salary. Since I have to pay my student loan, I need to get a good job that will let me pay off the loan within two years. But, as I said, my long-term plan is to get experience in cloud computing (from programming to the administrative part, i.e. every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically: take the FFT; take the log of the square of the absolute value (can be done with a lookup table); take another FFT; take the absolute value. I am attempting this using vDSP, and I can't understand how I didn't come across this technique earlier. I did a lot of hunting and asking questions; several weeks' worth. More to the point, I can't understand why I didn't think of it. The vDSP library looks as though it has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to place the actual peak within the bin more accurately. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2. As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases. So, questions: Does this approach make sense? Can it be improved? I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that, vDSP_vdbcon; however, there is no indication it precalculates a log table -- I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't. Is there some danger of harmonics being picked up? Is there any cunning way of making vDSP pull out the maxima, biggest first? Can anyone point me towards some research or literature on this technique? The main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy is indeed not sufficient. Is this the end of the line? Pi PS I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, accelerate framework, cepstral analysis.
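
    For what it's worth, here is a minimal NumPy sketch of the four steps described above (the asker would use vDSP; this is only to illustrate the signal flow, and the frame length, window and pitch search range are assumptions, not values from the question):

        import numpy as np

        def cepstral_pitch(frame, fs, fmin=50.0, fmax=500.0):
            # Step 1: FFT of the (windowed) frame
            spectrum = np.fft.fft(frame * np.hanning(len(frame)))
            # Step 2: log of the squared magnitude (the log power spectrum)
            log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)
            # Steps 3 and 4: another FFT, then the absolute value -> cepstrum
            cepstrum = np.abs(np.fft.fft(log_power))
            # A peak at quefrency q (in samples) corresponds to f0 = fs / q,
            # so search only the range of plausible pitch periods.
            q_min, q_max = int(fs / fmax), int(fs / fmin)
            peak = q_min + np.argmax(cepstrum[q_min:q_max])
            return fs / peak

        fs = 44100
        t = np.arange(2048) / fs
        frame = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
        print(cepstral_pitch(frame, fs))   # prints roughly 441 for this 440 Hz tone

    Note that the result is quantised to whole quefrency samples, which echoes the accuracy worry in the question; fitting a parabola over the neighbouring cepstral bins (the quadratic idea mentioned above) is one common way to refine the peak position.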

    Read the article

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to get my Synology DS212 NAS box to also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA and, to complicate things even further, we've got to use personal certificates (which is of course more secure, but more complicated to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so: /lib/ld-linux.so.3 --library-path /opt/lib \ /opt/openconnect/sbin/openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 It fails :( Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Using client certificate '/[email protected]/OU=Company VPN' 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Loading private key failed (see above errors) Loading certificate failed. Aborting. Failed to open HTTPS connection to <ip> Failed to obtain WebVPN cookie When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works: openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH' SSL negotiation with <ip> Server certificate verify failed: self signed certificate Certificate from VPN server "<ip>" failed verification. Reason: self signed certificate Enter 'yes' to accept, 'no' to abort; anything else to view: yes Connected to HTTPS on <ip> GET https://<ip>/ […] Well… The error on the NAS is this: 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Any ideas what's causing this? On the Syno, I use OpenConnect 4.06. On Ubuntu, I also compiled and installed OpenConnect 4.06 to a custom location. Thanks, Alexander

    Read the article

  • Help diagnosing Likewise Open Active Directory authentication problem

    - by purpletonic
    I have two servers which were, up until recently, authenticating against the company's Active Directory domain controller. I believe a recent change to the Active Directory administrator password caused the servers to stop authenticating against AD. I tried to add the servers back to the domain using the command: domainjoin-cli join example.com adusername This seemed to work without complaints, but when I try to log in via ssh with my domain account, I get an invalid password error. When I run the command: lw-enum-users it prints all of the domain users, and looking up my own account, I see that it is valid and my password hasn't expired. I also ran lw-get-status and received the following: LSA Server Status: Agent version: 5.0.0 Uptime: 0 days 3 hours 35 minutes 46 seconds [Authentication provider: lsa-activedirectory-provider] Status: Online Mode: Un-provisioned Domain: example.com Forest: example.com Site: Default-First-Site-Name Online check interval: 300 seconds [Trusted Domains: 1] [Domain: EXAMPLE] DNS Domain: example.com Netbios name: EXAMPLE Forest name: example.com Trustee DNS name: Client site name: Default-First-Site-Name Domain SID: S-1-5-24-1081533780-4562211299-822531512 Domain GUID: 057f0239-7715-4711-e64b-eb5eeed20e65 Trust Flags: [0x001d] [0x0001 - In forest] [0x0004 - Tree root] [0x0008 - Primary] [0x0010 - Native] Trust type: Up Level Trust Attributes: [0x0000] Trust Direction: Primary Domain Trust Mode: In my forest Trust (MFT) Domain flags: [0x0001] [0x0001 - Primary] [Domain Controller (DC) Information] DC Name: dc1.example.com DC Address: 10.11.0.103 DC Site: Default-First-Site-Name DC Flags: [0x000003fd] DC Is PDC: yes DC is time server: yes DC has writeable DS: yes DC is Global Catalog: yes DC is running KDC: yes [Authentication provider: lsa-local-provider] Status: Online Mode: Local system Anyone got any ideas what might be occurring? Thanks in advance!

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending in April next year we're looking into our options. A couple of options: 1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model). 2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients but Terminal Server CALs required? 3. Keep the 300 PCs, replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required? 4. Keep the 300 PCs - remove the hard drives and make use of a PXE-bootable "thin client" to connect to our terminal server. If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server out there? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - curious what the current trend is for this problem? Edit: Option 5 - Create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server - is Windows PE licence-free in such a model?

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and also to obtain better SEO results. We have now applied some minify measures, combined HTML, CSS etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration, so the first one can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and serve it quicker, have more MinSpareServers running, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css etc.) to the same files at, say, cdn.main.com, so the browser can open more connections. The question is, assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all the images on a page be penalized by Google? How do large companies work this out: does the original code already include images linked from the CDN with absolute paths? EDIT Just to clarify, our concern is not so much server performance or bandwidth. We could obviously employ an external CDN but we have plenty of CPU and bandwidth. Our concern is how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
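
    One way large sites sidestep the 301 question entirely is to rewrite the asset URLs in the HTML at response time rather than redirecting each request, so the browser asks cdn.main.com directly and main.com's Apache never sees the image hits. On Apache this is typically done with an output filter such as mod_substitute (or in PHP's output buffer); the Python snippet below is only a rough sketch of the idea, with the hostname and extension list as assumptions rather than anything from the question:

        import re

        CDN_HOST = "http://cdn.main.com"   # assumed to share the same docroot as main.com
        ASSET_RE = re.compile(r'(src|href)="(/[^"]+\.(?:jpg|gif|png|css|js))"')

        def rewrite_assets(html):
            # Point root-relative image/CSS/JS references at the CDN hostname,
            # leaving page links untouched, so the stored HTML never changes.
            return ASSET_RE.sub(lambda m: '%s="%s%s"' % (m.group(1), CDN_HOST, m.group(2)), html)

        print(rewrite_assets('<img src="/img/logo.jpg"> <a href="/about.html">about</a>'))
        # -> <img src="http://cdn.main.com/img/logo.jpg"> <a href="/about.html">about</a>

    Because no redirect is issued there is no extra rewrite load on main.com and no pile of 301s for Google to frown at; the trade-off is one string substitution per HTML response.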

    Read the article

  • Changing Domain Name DNS to Redirect web traffic to one server, and leave mail to original server

    - by David S
    Hi there, Ok, quite the idiot with DNS.. apart from the basics. I have a domain name hosted with a domain registrar. It seems to have full DNS control (i.e. ability to view/edit A Records, Mail etc..) We have recently setup a server at Rackspace which hosts the new website The original/existing server (where the old website still is and Mail) is on another shared hosting companies server I went to the domain name registrar, and checked out the DNS management as follows: click here to view the DNS screenshot So obviously the A Record is pointing to the actual server where the website/mail is I figure, and the CNAME is pointing (alias?) to the website url. So my question is this: If I want the web traffic portion to go to the Rackspace/new server, but keep the mail going to where it is now, what do I have to change? Also, should I even change this info at the domain registrar? the rackspace server account has full DNS which seems to suggest I can point to their nameservers and then re-direct the MX (Mail) traffic to where the mail server is? Sorry if that was a bit confusing.. obviously in need of DNS training ;) Any help very appreciated. David.

    Read the article

  • Domain registrar transfering

    - by Mike Weerasinghe
    In 2004 I registered a domain name when I opened an account with DiscountASP.NET. I presume my domain registration was handled by a reseller. A DomainTools whois search shows that registration services are provided by Znode LLC. I changed hosting companies and need to change the DNS servers to point to my new hosting company, but I have no idea how to do that. There is no control panel I can access. Ideally I would like to transfer registrars. I emailed Znode support but I have not received any response. I called and left a message and they have not called back. My new hosting company wants an EPP authorization code in order to transfer my domain. I guess I need to get it from Znode LLC. Anyone have any ideas on how I might go about transferring my domain over to a new registrar? The domain name has not expired and is currently active. Thanks in advance for your help.

    Read the article

  • Looking for software to create a personal hotspot on Mac OS X (not using the built-in ability)

    - by Fred
    I got my nifty little MacBook Air today. Since I will be going on travel I was hoping to use my iPad 3G as screen extension with Air Display. It works being in a shared WiFi network. Sadly Apple is a bit restrictive on the tethering with their mobile devices such as the iPad. Now I read iOS 4.3 will enable the iPad to use tethered networks. But the feature will not be working for iPad 3G. I assume their big partner telecom companies fear everyone will quit their internet contract with them. So I want to create a personal hotspot on my MacBook and use it on my iPad. But the iPad is not able to use this network. The network is visible but not usable. On Windows there is this connectify program which lets the PC act as wireless router. Does anybody know a similar program for Mac which would turn it into a wireless router? Or is there something I don't see with Mac OS X's builtin features?

    Read the article

  • CodePlex Daily Summary for Monday, January 10, 2011

    CodePlex Daily Summary for Monday, January 10, 2011Popular ReleasesSense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition for .NET 4: SenseNet 6.0.1 Community Edition for .NET 4 with SQL CE 4.0 This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition for .NET 4 Platform to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, light...Agile Personal Body Of Knowledge: ????-????,???? v0.2.pdf: ????【????-????,????.pdf】???,?????????????????????????????,???????????,???????。 ??????????,??????????,????????,?????????! ????sina??:http://q.t.sina.com.cn/135484VSSpeedster - Parallel Builds for VS: VSSpeedster 1.1: - Parallel Builds with MSBuild integrated in Visual StudioBernie's Trackviewer: Bernie's Trackviewer Version 1.2: Redesigned user interface of main form Also displays waypoints which are not part of a track Can convert a route int a track Maximum age of cached maps can be setPeople's Note: People's Note 0.21: Replaced note viewer buttons with a menu bar to improve scrolling performance. Fixed database relocation on low-resolution devices; thanks to compaNet for reporting. Improved signin error messages. To install: copy the appropriate CAB file onto your WM device and run it.mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.51.0 beta2: WEB.mytrip.mvc 1.0.51.0 Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) SRC.mytrip.mvc 1.0.51.0 System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.5, MVC3 RC WARNING For run and debug SRC.mytrip.mvc 1.0.51.0 download and install MVC3 RC...EnhSim: EnhSim 2.3.0: 2.3.0This release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a buton to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xmlHawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.4: [EDIT: 2010/01/10] In the case you are running an x86 Windows; please wait until Release 1.2.5 is made available: Hawkeye is broken on these OS. This is a maintenance release providing bug fixes. It comes in two flavors: Hawkeye.124.N2 is the standard .NET 2 build, was compiled with Visual Studio 2005 and can only inspect .NET 2 applications. Hawkeye.124.N4 is a .NET4 2 build, was compiled with Visual Studio 2010 and can only inspect .NET 4 applications. 
Please be patient until Release 1.3...Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...sNPCedit: sNPCedit v0.9d: added elementclient coordinate catcher to catch coordinates select a target (ingame) i.e. your char, npc or monster than click the button and coordinates+direction will be transfered to the selected row in the table corrected labels from Rot to Direction (because it is a vector)Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance..NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5- ASP.NET MVC 3 and EF Code First: Demo web app ASP.NET MVC 3, Razor and EF Code FirstVidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. 
It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffJson.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...New ProjectsAssimpXna: AssimpXna is a custom model importer for Xna 4.0 using the Open Asset Import Library (Assimp).ATCSim: This is an atc sim for a school projectAzure Role-Based Deployment: Azure Role-Based Deployment demonstrates how to use the CreateDeployment Windows Azure Service Management API to deploy an app from within a web role. This code can easily be ported to a worker role and thus included in the managment pack for a hosted service.CodeKata AltNet Hispano: Ejemplos de Code Kata usados por la comunidad AltNet Hispano.DataStoreCleaner: DataStoreCleaner clears "DataStore" folder which manages Windows Update History. It is useful for fixing WU error, or tune up Windows start-up. It's developed in C#.DS_HW2: dshw2EFT Calculator: EFT Calculator is an application that performs common cryptographic operations used in electronic funds transfer applications.Entity Visualizers: This project has debugger visualizers for several objects in the Entity Framework: EntityObject, EntityCollection, ObjectQuery and ObjectContext. Some of the source code is based on code from Julie Lerman's book "Programming Entity Framework".EzyCMS - Easy and Simple CMS made by ASP.Net MVC: EzyCMS makes both of end user and developer enjoy CMS benefit and extendability to perform requirements. The design principles: EASY TO USE, EASY TO EXTEND, FLEXIBLE AND PWOERFUL TECHNOLOGY: ASP.Net MVC2, NHibernate, StructureMap, JQueryFlatStore: Simple library to simplify storage of application data when a bulky dedicated database is cumbersome and unnecessaryGigantornis: Gigantornis is a tool for benchmarking your Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current server installation performs. This especially shows you how many requests per second your server installation is capable of serving.Gonte.Dal: Data access layer for NETLezatrus: Lezatrus is the open source project to help people find places to eat in Jakarta. It's developed in ASP.NET MVC using Razor and C#. It's the sample app for Pro ASP.NET MVC Coding Ninja facebook group.Moo: Moo is an object-to-object multi-mapper. 
It is able to use multiple different strategies (in a mix of convention, configuration, attributes and fluent calls) when mapping from one object to another.MvcXaml: A custom View Engine for ASP.NET MVC that allows Controller Action Methods to return dynamically generated images based on XAML markup.Perfect World Bot Development FrameWork: <empty yet>Silverlight motion detection: Motion detection using Silverlight 4 camera support and a simple motion detection algorithm.Small IT Business Manager: Small IT Business Manager is a tool being created keeping small-midsize IT companies in mind to allow them manage their day to day chores. Management Features planned: * Workers * Timesheets * Financial * HR * Basic Project Management * Invoicingsomething for testing: mot do an mau de test cac van de lien quan toi codeStructure Copier: This small program is supposed to copy tree structure of directory.TogNet: A small utility program to toggle between windows network adapters. Needed a program like this to switch between an external Wireless network and the corporate Lan network adapter.

    Read the article

  • A possible case of hacked email account. What kind of an attack is this?

    - by Rickesh John
    I own a Yahoo mail account. I use this account for sending resumes and receiving notifications from various job portals. But yesterday, I found that some 10-15 mails had been sent to random addresses from my account. Most of them had this format: hr@<companyname>.com I am pretty sure that I didn't send any mails to such addresses. Initially, I thought the job portals might be sending mails on my behalf and Yahoo was logging them, but then I saw the contents. The contents of all those mails were just a URL, which I did not click. SCARED. Also, to top it off, my "Sending Name" has been changed to 'Nice Maria'!! o_0 I have taken the necessary measures and changed my password and the secret question. I cannot delete this account as this email is registered with all the job portals and other companies. Is this a simple case of my account being compromised, or was I a victim of some web vulnerability? All the mails seem to be bot-generated, with only a URL as the message body. Please advise.

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'desktop', 'my documents' and 'application data'. But as our network is split across two sites, we have one file server at each site, which are configured to use domain based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replication folder is as follows: \\domain\folderredirection$\<username>\<redirected-folder-name> The real paths are \\site-1-server\folderredirection$\<username>\<redirected-folder-name> and \\site-2-server\folderredirection$\<username>\<redirected-folder-name> As our users all switch between sites (sometimes several time per day), our folder redirection policy has to redirect to the DFS roots rather than hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops, we use offline files for the redirected folders, and this also works fine, however the problem is as follows: When conflicts occur in offline files, it is impossible to resolve the conflicts. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'), however, not one of these options will resolve any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop, it affects every laptop and every conflicting file in exactly the same way. I would have thought the set up we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before?

    Read the article

  • Postfix Whitelist before recipient restrictions

    - by GruffTech
    Alright. Some background. We have an anti-spam cluster handling about 2-3 million emails per day, blocking somewhere in the range of 99% of spam email from our end users. The underlying SMTP server is Postfix 2.2.10. The "frontline defense" before mail gets carted off to SpamAssassin/ClamAV/etc. is attached below. ...basic config.... smtpd_recipient_restrictions = reject_unauth_destination, reject_rbl_client b.barracudacentral.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.mailspike.net, check_policy_service unix:postgrey/socket ...more basic config.... As you can see, standard RBL services from various companies, as well as a Postgrey service. The problem is, I have one client (out of thousands) who is very upset that we blocked an important email of theirs. It was sent through a Russian freemailer that was, at the time, blocked in two of our three RBLs. I explained the situation to them, however they are insisting we do not block any of their emails. So I need a method of whitelisting ANY email that comes to domain.com, and I need it to take place before any of the recipient restrictions; they want no RBL or Postgrey blocking at all. I've done a bit of research myself; http://www.howtoforge.com/how-to-whitelist-hosts-ip-addresses-in-postfix seemed to be a good guide at first, almost fixing my problem, but I want it to accept based on the TO address, not the originating server.

    Read the article

  • Public-to-Public IPSec tunnel: NAT confusion

    - by WuckaChucka
    I know this is possible -- and apparently fairly common with larger companies that don't/can't route private addresses for overlap reasons -- but I can't wrap my head around how to get this to work. I'm playing around with pfSense, Vyatta and a Cisco 5505 right now, hardware-wise. So here's my setup: WEST: Vyatta outside: 10.0.0.254/24 inside: 172.16.0.1/24 machine a: 172.16.0.200/24 EAST: Cisco 5505 outside: 10.0.0.210/24 inside: 192.168.10.1 machine b (webserver): 192.168.10.2 So what we're trying to do is this: route traffic across the tunnel from machine A to machine B without using private addresses. i.e. 172.16.0.200 makes a TCP request to 10.0.0.210:80, and as far as EAST is concerned, it sees a src IP of 10.0.0.254. On WEST, I have your typical many-to-one Source NAT to translate 172.16.0.0/24 to 10.0.0.254 and that's confirmed to be working. Also on WEST, I have the following IPSec config: Local IP: 10.0.0.254 Peer IP: 10.0.0.210 local subnet: 10.0.0.254/32 remote subnet: 10.0.0.210/32 I have the reversed configuration on EAST. What happens when I make a request from machine A to 10.0.0.210:80 is that the SNAT translates the private address of machine A to 10.0.0.254 and it's routed out (and discarded at the other end) without establishing the tunnel. What I'm assuming is happening is that the inside interface on WEST receives a packet from 172.16.0.200 and since this doesn't match the local subnet defined in the tunnel configuration, it's not processed by the IPSec engine and the tunnel is not established. How do you make this work? Seems like a chicken and egg thing with the NAT and IPSec and I just can't wrap my head around how this can be done: can I say, "if a packet is received on the inside interface with a destination of 10.0.0.210, translate it to 10.0.0.254 before the IPSec engine inspects it"?

    Read the article

  • How use DNS server to create simple HA (High availability) of my website?

    - by marc22
    Welcome, How can I use DNS to create simple HA (high availability) for a website? For example, if my web server (for better understanding I use internal IPs; in reality they would be at different hosting companies) 192.168.0.120:80 is offline, traffic should go to 192.168.0.130:80. You are right, "high availability" was the wrong term; of course I was thinking about failover. Using several IPs in A records is fine for simple load balancing, but not if I want to notify the user about the failure (for example by displaying a page saying "Oops, something is wrong with our server, we are working on it") instead of a "can't establish connection" error. I was thinking about setting up something like this: 2 DNS servers, one installed on the www server itself, both with a low TTL on my domain, and 2 NS records, the first pointing to the DNS server on my Apache box, the second to the other DNS server. If a user tries to connect he will get the IP of the www server from the first DNS server; if that DNS server is offline (then probably the www server is also down) the resolver will try the second NS record, which points to another DNS server, and that one will point to the "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, or I may use another country for the backup.
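
    As a side note on the mechanics: resolvers don't fall back to the second NS record just because the web server stopped answering, so the more common "poor man's failover" is a single low-TTL A record plus a small monitor that repoints it when the primary goes dark. Here is a rough sketch only (the IPs come from the example above; the actual record update is a placeholder for whatever API or nsupdate mechanism the DNS host provides):

        import time
        import urllib.request

        PRIMARY_IP = "192.168.0.120"   # main web server from the example
        BACKUP_IP = "192.168.0.130"    # server with the "we are working on it" page

        def primary_is_up(timeout=5):
            try:
                return urllib.request.urlopen("http://%s/" % PRIMARY_IP, timeout=timeout).status == 200
            except Exception:
                return False

        def point_www_at(ip):
            # Placeholder: call the DNS provider's update API (or run nsupdate
            # against your own zone) to set the low-TTL A record for www.
            print("would repoint the A record to", ip)

        current = PRIMARY_IP
        while True:
            wanted = PRIMARY_IP if primary_is_up() else BACKUP_IP
            if wanted != current:
                point_www_at(wanted)
                current = wanted
            time.sleep(30)   # keep the poll interval comparable to the record's TTL

    Visitors still holding the old answer in their resolver cache keep seeing it until the TTL expires, which is the usual limitation of DNS-based failover.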

    Read the article

  • best-practices to block social sites

    - by adopilot
    In our company we have around 100 workstations with internet access, and day by day the situation is getting worse and worse in terms of internet access being used for private jobs and for wasting time on social sites. Honestly, I am not in favour of blocking sites like Facebook, YouTube and similar ones, but day by day my colleagues do not finish their tasks, and whenever I look at their monitors they are running IE or Mozilla and chatting and things like that. I would also like to block YouTube at times when we have very poor internet access speed. Here are my questions: Do other companies block social sites? Do I need a dedicated device for that, like a hardware firewall or a super-expensive router, or can I do it with my existing FreeBSD 6.1 self-made router with two LAN cards and NAT configured so it acts as a router? I was trying to do that using ipfw and the router firewall but without success. My rules look like this:
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.facebook.com
        ipfw add 25 deny udp from 192.168.0.0/20 to www.facebook.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.youtube.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.youtube.com
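
    Part of the reason rules like the above don't hold is that ipfw resolves a hostname to a single address at the moment the rule is added, while sites such as Facebook and YouTube answer from many, frequently changing addresses. A filtering proxy (Squid with an ACL, for example) is the usual answer, but if the rules must stay in ipfw, one rough workaround is a periodically run script that re-resolves the names and keeps the current addresses in an ipfw table. This is a sketch only, assuming the FreeBSD 6.1 ipfw supports address tables; the table and rule numbers are arbitrary:

        import socket
        import subprocess

        BLOCKED = ["www.facebook.com", "www.youtube.com"]   # names from the question
        TABLE = "1"   # matched by a single rule such as:
                      #   ipfw add 25 deny ip from 192.168.0.0/20 to table(1)

        def current_addresses(names):
            addrs = set()
            for name in names:
                try:
                    # gethostbyname_ex returns (name, aliases, list_of_ips)
                    addrs.update(socket.gethostbyname_ex(name)[2])
                except socket.gaierror:
                    pass
            return addrs

        subprocess.call(["ipfw", "table", TABLE, "flush"])
        for addr in sorted(current_addresses(BLOCKED)):
            subprocess.call(["ipfw", "table", TABLE, "add", addr])

    This only catches whatever addresses the names resolve to at the time the script runs, which is why a proxy that filters on the requested hostname is generally the more reliable approach.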

    Read the article
