Search Results

Search found 9124 results on 365 pages for 'big sal'.


  • I can't ping my DMZ zone from a local inside PC

    - by Big Denzel
    Hi everybody. Can anyone please help me with the following issue? I have a Cisco ASA 5520 configured on my network, and I can't ping the DMZ interface from a PC on the inside network. The only place I can ping the DMZ from is the Cisco ASA firewall itself: from there I can ping all three interfaces (inside, outside and DMZ), but no PC on the inside network can reach the DMZ. Can anyone help? I thank you all in advance. Below is the output of show run on my Cisco ASA 5520 firewall:

        ASA-FW# sh run
        : Saved
        :
        ASA Version 7.0(8)
        !
        hostname ASA-FW
        enable password encrypted
        passwd encrypted
        names
        dns-guard
        !
        interface GigabitEthernet0/0
         description "Link-To-GW-Router"
         nameif outside
         security-level 0
         ip address 41.223.156.109 255.255.255.248
        !
        interface GigabitEthernet0/1
         description "Link-To-Local-LAN"
         nameif inside
         security-level 100
         ip address 10.1.4.1 255.255.252.0
        !
        interface GigabitEthernet0/2
         description "Link-To-DMZ"
         nameif dmz
         security-level 50
         ip address 172.16.16.1 255.255.255.0
        !
        interface GigabitEthernet0/3
         shutdown
         no nameif
         no security-level
         no ip address
        !
        interface Management0/0
         description "Local-Management-Interface"
         no nameif
         no security-level
         ip address 192.168.192.1 255.255.255.0
        !
        ftp mode passive
        access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.107 eq smtp
        access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.106 eq www
        access-list OUT-TO-DMZ extended permit icmp any any log
        access-list OUT-TO-DMZ extended deny ip any any
        access-list inside extended permit tcp any any eq pop3
        access-list inside extended permit tcp any any eq smtp
        access-list inside extended permit tcp any any eq ssh
        access-list inside extended permit tcp any any eq telnet
        access-list inside extended permit tcp any any eq https
        access-list inside extended permit udp any any eq domain
        access-list inside extended permit tcp any any eq domain
        access-list inside extended permit tcp any any eq www
        access-list inside extended permit ip any any
        access-list inside extended permit icmp any any
        access-list dmz extended permit ip any any
        access-list dmz extended permit icmp any any
        access-list cap extended permit ip 10.1.4.0 255.255.252.0 172.16.16.0 255.255.255.0
        access-list cap extended permit ip 172.16.16.0 255.255.255.0 10.1.4.0 255.255.252.0
        no pager
        logging enable
        logging buffer-size 5000
        logging monitor warnings
        logging trap warnings
        mtu outside 1500
        mtu inside 1500
        mtu dmz 1500
        no failover
        asdm image disk0:/asdm-508.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (dmz,outside) tcp 41.223.156.106 www 172.16.16.80 www netmask 255.255.255.255
        static (dmz,outside) tcp 41.223.156.107 smtp 172.16.16.25 smtp netmask 255.255.255.255
        static (inside,dmz) 10.1.0.0 10.1.16.0 netmask 255.255.252.0
        access-group OUT-TO-DMZ in interface outside
        access-group inside in interface inside
        access-group dmz in interface dmz
        route outside 0.0.0.0 0.0.0.0 41.223.156.108 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00
        timeout mgcp-pat 0:05:00 sip 0:30:00 sip_media 0:02:00
        timeout uauth 0:05:00 absolute
        http server enable
        http 10.1.4.0 255.255.252.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        management-access inside
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map global_policy
         class inspection_default
          inspect dns maximum-length 512
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
        service-policy global_policy global
        Cryptochecksum:
        : end
        ASA-FW#

    Please help. Big Denzel
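
    Two common causes of exactly this symptom on ASA 7.x, offered as educated guesses rather than a confirmed diagnosis: the policy-map above has no inspect icmp, so echo replies are not statefully allowed back through the firewall; and the static (inside,dmz) 10.1.0.0 10.1.16.0 line maps 10.1.16.0, which is not part of the inside network (10.1.4.0/22), so it looks like a typo. A minimal sketch of both changes, using the subnet from the inside interface (verify it matches your addressing before applying):

        ASA-FW(config)# static (inside,dmz) 10.1.4.0 10.1.4.0 netmask 255.255.252.0
        ASA-FW(config)# policy-map global_policy
        ASA-FW(config-pmap)# class inspection_default
        ASA-FW(config-pmap-c)# inspect icmp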

    Read the article

  • Running perfmon continuously with periodic reports

    - by Sal
    I have a question very similar to this one, but I want perfmon to run continuously, across reboots and throughout the day. Further, I'd like it to generate a perfmon report every 10 minutes or so. The original question tells me how to run perfmon when the server is restarted, but I don't know how to make perfmon run continuously while emitting periodic report files. I've tried setting it up as a scheduled task that runs every 10 minutes, but this is too sloppy: when the scheduled task kicks off another instance, the current perfmon report writer crashes and I get a garbage report. I've also tried writing a sloppy batch script that would fire off the task at scheduled intervals, but it has the same problem as the scheduled task. I'm sure I'm just missing something silly, but I don't see it. Ideas? (If it helps, I'm running Windows 7 locally, and I'm trying to set up the processes for boxes running Windows 2008.)
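
    A single data collector set can do the file rollover itself, without a scheduled task restarting it: logman's -cnf option starts a new output file after a given interval, and -v appends a timestamp to each file name. A minimal sketch; the set name, counter path, and output folder here are placeholders, not taken from the question:

        logman create counter PerfLog -c "\Processor(_Total)\% Processor Time" -si 00:00:10 -cnf 00:10:00 -v mmddhhmm -f csv -o C:\PerfLogs\PerfLog
        logman start PerfLog

    To survive reboots, the set can then be started from a boot-time scheduled task (schtasks /create ... /sc onstart /tr "logman start PerfLog"), which is the approach the linked question describes.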

    Read the article

  • Windows Resource Monitoring Programs

    - by Sal
    I work at a small tech start-up managing websites with our own in-house server-side code. (In production, we use Windows Server 2008 boxes running Java 6. In our dev boxes, we use Windows 7 running Java 7.) Recently, some of our boxes in production failed, and we had no means of troubleshooting, since we keep little to no monitoring logs about a given box's CPU/memory/RAM usage, etc. So, I'm wondering if there is some commercial/freeware product that's the standard for performance monitoring/logging. Essentially, I'm just looking for an analytics system, similar to the Windows Task Manager or the Resource Monitor, that serializes all of its data periodically. Ideally, I'd like to find a program that's also extensible, in case I'd like to add additional monitors in the future.

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than it appears. The two concepts intersect at various points.

    BigData and Return on Investment of Marketing Campaigns

    In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments on social media in terms of followers and likes. "The measurement models applied by marketers on TV campaigns don't work on social," he continues; we need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement.

    Social CRM and BigData

    The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to identify new customer niches and communities and define appropriate strategies for contacting potential customers. Gartner says the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, analyst at Forrester Research, "Social customer relationship management is hard" (The Social CRM Arms Race Heats Up). To succeed, brands need three things: investment in new social tools, investment in consultancy, and investment in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy or cheap win. But what are the customer benefits of such investments?

    Big Data and Brand Engagement

    Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious at not having the same fast multichannel experience with their service or preferred goods providers that they find on their social media. Yes, I know you have read this before; me too. But it is real. The Customer Experience motto of providing a consistent experience through multiple touchpoints, one that makes the customer/brand relationship easier and more valuable, finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative. In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating that 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation/decision. The Guide details each of these stages and the technologies supporting them:

    Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example, healthcare or financial services records can be anonymized before transfer to Hadoop.

    Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS.

    Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform.

    Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools.

    So how are companies taking advantage of this today? As an example, online retailers can use big data from their web properties to better understand site visitors' activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and to deliver highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including sentiment analysis, marketing campaign analysis, customer churn modeling, fraud detection, research and development, risk modeling, and more.

    As the guide discusses, Big Data promises a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
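
    As a rough illustration of the Organize and Decide steps, here is what the Sqoop transfers might look like; the connection string, credentials, table and directory names are invented placeholders, not taken from the guide:

        # Organize: bulk-load a MySQL table into HDFS
        sqoop import --connect jdbc:mysql://db.example.com/sales \
              --username etl -P \
              --table web_clicks --target-dir /data/web_clicks

        # Decide: push aggregated results back into MySQL
        sqoop export --connect jdbc:mysql://db.example.com/sales \
              --username etl -P \
              --table click_summary --export-dir /results/click_summary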

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson
    Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.

    SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well-represented in the session schedule. And now you can save on registration with a special Oracle discount code.

    Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."

    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.

    Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache | Ricardo Ferreira
    Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.

    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena
    The latest chapter in Oracle ACE Yannick Ongena's ongoing series.

    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.

    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't
    This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.

    Worst Practices for Big Data | Dain Hansen
    Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.

    Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald
    "The online conference will include seminars, hands-on labs and live chats with our technical staff, including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop."

    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch
    Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.

    Amazon Web Services (AWS) Autoscaling | Frank Munz
    "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web-based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.

    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin
    "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.

    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable
    Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.

    Thought for the Day
    "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier
    Source: SoftwareQuotes.com

    Read the article

  • Piping perfmon logs over DFS

    - by Sal
    I'm running perfmon on several servers, and I'd like all of the output to be piped to one particular server. I'm trying to do this over DFS by modifying the Root directory argument on each of the servers, setting a DFS path like so:

        Root Directory: \\PERFMON_LOG_REPOSITORY\[MY_COMP_NAME]

    The trouble is that when I make the root directory dump the logs to a file over DFS, I always get the following error upon starting the collector set: "When attempting to start the data collector set, the following system error occurred: Access is denied."
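
    A plausible cause, though an assumption since the question doesn't show the collector's security settings: data collector sets run as the local SYSTEM account by default, and SYSTEM has no credentials on a remote share, so any UNC output path fails with "Access is denied". The set can be told to run under a domain account that has write access to the share, either from the Run As field on the set's properties or with logman; the set name and account below are placeholders:

        logman update PerfLog -u DOMAIN\svc_perfmon *
        rem the * makes logman prompt for the password

    Alternatively, granting each collecting server's computer account (SERVERNAME$) write access on the target share achieves the same thing without a service account.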

    Read the article

  • Representing server state with a metric

    - by Sal
    I'm using Microsoft's Performance Monitor to dump logs of RAM, CPU, network, and disk usage from multiple servers. I'd like to get a single metric that captures the state of a given variable to a good extent. For instance, disk usage is pretty stable, so if I take a single reading that says I have 50% remaining disk space, that reading will give me an accurate measure for the day. (The servers aren't doing heavy IO writing.) However, the tricky part is monitoring CPU and network usage. The logs currently dump the % CPU usage every ten seconds. If I take a straight average of the numbers, it may not represent reality, as % CPU will be much lower during the night than during the day. (We host websites that sell appliance items.) I'd like to get an average over a span during peak hours (about 5 hours in the day) and present a daily peak-hour metric. Of course, there will most likely be some readings that come in overly spiked (if multiple users hit the server at once) or that are of no use (a momentary idle state). Is there a standard distribution/test industries use in these situations?
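
    There is no single industry-standard test, but a very common convention is to summarize each peak-hour window with a high percentile (p95 is typical for capacity work) rather than a plain mean, since a percentile is insensitive to both momentary idle readings and one-off spikes; a trimmed mean over the window is another option. A tiny sketch of the nearest-rank percentile, in plain JavaScript with made-up sample values:

        // 95th-percentile summary of a window of %-CPU samples
        function percentile(samples, p) {
            var sorted = samples.slice().sort(function (a, b) { return a - b; });
            var idx = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1);
            return sorted[Math.max(0, idx)];
        }

        var peakHourSamples = [12, 35, 41, 38, 99, 2, 44, 40, 37]; // made-up readings
        console.log(percentile(peakHourSamples, 0.95));            // daily peak-hour metric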

    Read the article

  • UTF-8 and #! shell scripts

    - by sal
    Is there a way to configure bash on Linux (Red Hat and Ubuntu) to allow shell scripts to be encoded in UTF-8? I can't find a simple way to change just one little thing and have the whole system just use UTF-8 files without having to worry about encoding.
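
    For what it's worth, bash doesn't inspect a script's encoding at all; what governs UTF-8 handling is the locale environment the shell and tools run under. A minimal sketch, assuming an en_US.UTF-8 locale is generated on the system (the locale name is an assumption; any UTF-8 locale works):

        # per session, or at the top of a script
        export LANG=en_US.UTF-8

        # system-wide default (one line in each file):
        #   Red Hat: /etc/sysconfig/i18n
        #   Ubuntu:  /etc/default/locale
        LANG=en_US.UTF-8

        # verify what is active
        locale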

    Read the article

  • Frequent errors with a Subversion repository on a FAT32 USB memory stick

    - by sal
    I keep a copy of a Subversion repository on a USB memory stick that is formatted with FAT32. I am using TortoiseSVN on XP and command-line svn 1.6.x on Ubuntu and OS X with this memory stick. I notice that I need to run svn cleanup just about every time, or updates and commits will not work. I routinely have errors with .lock and *.svn/text-base/** files getting corrupted. Errors tend to be "parameter is incorrect" or "lock file can not be read". Sometimes svn cleanup works and sometimes chflags -R nouchg * does. Is there anything I can do to prevent this?

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem.

    Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated.

    The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm.

    I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed.

    How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over.

    It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’.

    It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • Why does my co-worker see a different Project file (*.csproj) using Visual Source Safe

    - by Leo Zhang
    Hello everybody. I've hit a very strange problem. My company uses Visual SourceSafe for version control, but I found that different members of my team see different contents for the same .csproj file in VSS. It's very strange; can you help me? Thanks!

    There is a file named IPRA.WinUi.Sal.Sra.csproj in VSS. When Tom logs on, the file IPRA.WinUi.Sal.Sra.csproj is:

        <Reference Include="Ark.Client.WinUi, Version=2.0.0.0, Culture=neutral, processorArchitecture=MSIL">
          <SpecificVersion>False</SpecificVersion>
          <HintPath>..\ARAF\BusinessFramework\Ark.Client.WinUi.dll</HintPath>
        </Reference>
        <Reference Include="Ark.Common.Business, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL" />
        <Reference Include="Ark.Controls.Business, Version=0.0.0.0, Culture=neutral, processorArchitecture=MSIL">
          <SpecificVersion>False</SpecificVersion>
          <HintPath>..\ARAF\SystemFramework\Ark.Controls.Business.dll</HintPath>
        </Reference>

    But when Leo logs on, the same file IPRA.WinUi.Sal.Sra.csproj is:

        <Reference Include="Ark.Client.WinUi, Version=2.0.0.0, Culture=neutral, processorArchitecture=MSIL">
          <SpecificVersion>False</SpecificVersion>
          <HintPath>..\ARAF\BusinessFramework\Ark.Client.WinUi.dll</HintPath>
        </Reference>
        <Reference Include="Ark.Common.Business, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL" />
        <SpecificVersion>False</SpecificVersion>
        <HintPath>..\ARAF\BusinessFramework\Ark.Controls.WinUi.dll</HintPath>
        <Reference Include="Ark.Controls.Business, Version=0.0.0.0, Culture=neutral, processorArchitecture=MSIL">
          <SpecificVersion>False</SpecificVersion>
          <HintPath>..\ARAF\SystemFramework\Ark.Controls.Business.dll</HintPath>
        </Reference>

    (Note that in Leo's copy, the stray SpecificVersion/HintPath pair for Ark.Controls.WinUi.dll sits outside any Reference element; that is the difference between the two versions.)

    Read the article

  • How can I create a new Person object correctly in Javascript?

    - by TimDog
    I'm still struggling with this concept. I have two different Person objects, very simply:

        ;Person1 = (function() {
            function P (fname, lname) {
                P.FirstName = fname;
                P.LastName = lname;
                return P;
            }
            P.FirstName = '';
            P.LastName = '';
            var prName = 'private';
            P.showPrivate = function() { alert(prName); };
            return P;
        })();

        ;Person2 = (function() {
            var prName = 'private';
            this.FirstName = '';
            this.LastName = '';
            this.showPrivate = function() { alert(prName); };
            return function(fname, lname) {
                this.FirstName = fname;
                this.LastName = lname;
            }
        })();

    And let's say I invoke them like this:

        var s = new Array();
        //Person1
        s.push(new Person1("sal", "smith"));
        s.push(new Person1("bill", "wonk"));
        alert(s[0].FirstName);
        alert(s[1].FirstName);
        s[1].showPrivate();

        //Person2
        s.push(new Person2("sal", "smith"));
        s.push(new Person2("bill", "wonk"));
        alert(s[2].FirstName);
        alert(s[3].FirstName);
        s[3].showPrivate();

    The Person1 set alerts "bill" twice, then alerts "private" once -- so it recognizes the showPrivate function, but the local FirstName variable gets overwritten. The second Person2 set alerts "sal", then "bill", but it fails when the showPrivate function is called. The new keyword here works as I'd expect, but showPrivate (which I thought was a publicly exposed function within the closure) is apparently not public. I want to get my object to have distinct copies of all local variables and also expose public methods -- I've been studying closures quite a bit, but I'm still confused on this one. Thanks for your help.
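
    One pattern that satisfies both goals, sketched as a guess at the intent rather than the canonical answer: keep the per-instance fields and the private variable inside the constructor, so each new call gets its own closure:

        function Person(fname, lname) {
            var prName = 'private';       // captured per instance by the closure below
            this.FirstName = fname;
            this.LastName = lname;
            this.showPrivate = function() { alert(prName); };
        }

        var p1 = new Person("sal", "smith");
        var p2 = new Person("bill", "wonk");
        alert(p1.FirstName);  // "sal"
        alert(p2.FirstName);  // "bill"
        p2.showPrivate();     // "private"

    In Person1 above, FirstName is set on the function object P itself, so every instance shares one copy; in Person2, showPrivate is assigned to whatever this is when the outer IIFE runs (the global object), not to the constructed instances.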

    Read the article

  • Using BWSAL with BWAPI JBridge

    - by pek
    I've been busting my head trying to figure out how to use BWSAL with JBridge, and I just can't work it out; I'm not that good with C++. The best I could do was compile the latest JBridge from the repo, but I can't get BWSAL working. What do I need? Compiling JBridge doesn't seem to include BWSAL. I found the directory where BWSAL lives, but there is no solution file to build. I downloaded the latest BWSAL and found the solution file, but I have no idea what to do next; compiling it creates an AIModule.dll, but if I use that, JBridge won't work. What I'm trying to do is simply use BuildingPlacer.getBuildLocationNear. Whenever I call this method, a NullPointerException is thrown:

        java.lang.NullPointerException
            at org.bwapi.bridge.model.Game.canBuildHere(Game.java:286)
            at org.bwapi.bridge.sal.BuildingPlacer.canBuildHere(BuildingPlacer.java:46)
            at org.bwapi.bridge.sal.BuildingPlacer.canBuildHereWithSpace(BuildingPlacer.java:60)
            at org.bwapi.bridge.sal.BuildingPlacer.getBuildLocationNear(BuildingPlacer.java:121)
            at com.pekalicious.starplanner.ResourceManager.isInitialReady(ResourceManager.java:87)
            at com.pekalicious.starplanner.ResourceManager.update(ResourceManager.java:37)
            at com.pekalicious.starplanner.StrategicManager.update(StrategicManager.java:51)
            at com.pekalicious.starplanner.StarPlannerBridgeBot.onFrame(StarPlannerBridgeBot.java:36)

    How do I make this work? I really need this. Please. Thank you.

    Read the article

  • SVN merge adding parameters. WTF? Or how to do big merges?

    - by HeavyWave
    I am doing an SVN merge for a branch, and in one of the files I see this:

        GetQueryReferenceData(int sessionId, Int32 sessionId)

    which means the merge tool just added another parameter without asking any questions. Imagine if it were a call to Substring(0) in one branch and Substring(0, 2) in the other; those have completely different behavior, so how does the tool even decide which one to choose? Good thing it came up at compile time. The problem is that such a change will not be marked as a conflict and will be merged automatically. That is very dangerous behavior, and if you don't have the luxury of a unit test for every line of code, you are screwed. What am I doing wrong, and how do I do big merges without the merge tool silently putting in dangerous changes? Is there a merge tool that is not language-agnostic? I am using TortoiseSVN.

    Read the article

  • Suggestions for Windows 8 migration [closed]

    - by Big Endian
    I'm thinking of migrating to Windows 8. At first I hated it, but I'm pretty sure the Windows 8 model is the future, and I don't particularly want to end up hating the future like my parents, frustrated and bewildered by anything past Windows XP.

    I'm currently running Windows 7, and my system has been accumulating problems. It's probably an accumulation of issues from installing too much software, changing firewall settings, installing Ubuntu alongside Windows, and... well, I'm not sure, but my computer has been buggy in unexpected ways lately (freezing and unfreezing, the display driver crashing and recovering, and what I call a "deep freeze/thaw cycle" where the mouse won't even move for a while). I'm good at solving computer problems, but I can't seem to get to the root of these, and my best idea for fixing them is to make sure I've backed up every file and then re-install the entire OS. Luckily for me, a new OS is just around the corner, so this would be a good time to get two things out of the way at once.

    The problem I see is that the upgrade options are all "seamless". I don't want a seamless upgrade; I want to wipe the slate clean and start all over. Does this mean I will have to buy a full, new copy of Windows 8 rather than one of the cheaper upgrade options? Or does it not make sense for me to go to Windows 8, given that I have a laptop, not a tablet? Maybe I should just re-install Windows 7, or even call "good enough" good enough, try to eliminate the bugs, and start with a fresh slate in 2-3 years after this computer eventually dies entirely from (inevitable) hardware failure. What would be the advantages, disadvantages, and costs of each option? How would I go about upgrading to Windows 8 if that's the option I choose, and what is your personal opinion about my situation?

    Read the article

  • IBM System i Permissions on Database views

    - by Big EMPin
    We have an IBM System i running IBM i OS V6R1. On this system I have created some database views. What I want to do is give a particular user group access to ONLY these views and nothing else within the library in which the views reside. Is this possible? I had a user group with read-only permissions to all tables and views in the library where my views are located, and access works when the user is under this user group. I tried copying the user group and then assigning permissions that include only the views I have created, and access is denied. Does a user or user group also have to have permissions on the table from which the view originates in order to access the view?
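
    If I recall the DB2 for i rules correctly (worth verifying on your release), authority to an SQL view is checked against the view itself, and it is the view's owner, not the end user, who must hold authority to the underlying tables; end users need only authority to the view plus *EXECUTE on the library. A sketch in CL, with the library, view, and group profile names invented for illustration:

        /* start the group from zero on this library's objects */
        RVKOBJAUT OBJ(MYLIB/*ALL) OBJTYPE(*ALL) USER(RPTGROUP) AUT(*ALL)

        /* read-only access to just the views */
        GRTOBJAUT OBJ(MYLIB/MYVIEW1) OBJTYPE(*FILE) USER(RPTGROUP) AUT(*USE)

        /* the group still needs to reach the library itself */
        GRTOBJAUT OBJ(MYLIB) OBJTYPE(*LIB) USER(RPTGROUP) AUT(*EXECUTE)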

    Read the article

  • VMWare Host needing ESXi 5.0 - Running on a 2008 R2 Host?

    - by Great Big Al
    I have a VMware image supplied by my phone system provider, which runs a contact management interface; they tell me that VMware ESXi 5.0 is the highest supported host. I'm used to MS Hyper-V and have no VMware experience. At present I have this guest running on a simple desktop PC running ESXi 5.0.0. I'd ideally like to run this guest (it's Windows 7 with their software already installed and configured) on a Windows 2008 R2 server I have available. As I said, I'm used to Hyper-V, and I can't easily work out whether there is a version of VMware that runs on a Windows 2008 R2 host and supports guests built for ESXi 5.0. Is there such a product, what is it called, and with one guest can I run it without purchasing a license? Thanks

    Read the article

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive, and four SATA HDDs, all of which I would like to be passed through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller. Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD value on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that did not help either. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation. How would I be able to install ESXi v5.5 onto part of my SSD that is connected to the storage controller, while giving it full physical access to the disk (to allow SMART values to be read, etc.)?

    Read the article

  • Windows user moving to Ubuntu 12.04. Where are the system tools, or equivalents?

    - by Big Endian
    I am a Windows user who has begun experimenting with Ubuntu. Ubuntu seems great, except for all the things it seems like I CAN'T do. How do I get to the advanced administration stuff, like the list of drivers, all of the installed software, and something equivalent to Windows' Device Manager? I always heard that Linux was supposed to be very raw, and that you had to have lots of computer experience to make it work. This seems just the opposite. Ubuntu seems very modern and user-friendly, better in some regards than any operating system I have seen. Unfortunately, I can't find any of the guts of this system beneath all of the user-friendly frosting... gunk... crap... stuff. I'm reminded more and more of an Apple computer (except Linux is more affordable :). So how do I peel back this layer and start really using the computer? A solution other than installing GNOME 3 would be appreciated.
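
    For what it's worth, most of those Windows tools have command-line equivalents that ship with Ubuntu 12.04; a few examples, all standard commands run from a terminal:

        lspci -k           # PCI devices and the kernel drivers bound to them
        lsusb              # USB devices
        lsmod              # currently loaded kernel modules (drivers)
        dpkg -l            # every installed package
        sudo lshw -short   # hardware summary, the closest thing to Device Manager

    The graphical System Monitor (gnome-system-monitor) covers processes and resource usage, roughly like Task Manager.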

    Read the article
