Search Results

Search found 21319 results on 853 pages for 'state management'.


  • RAID 5 mdadm Problem - Help Please

    - by user66260
    My RAID 5 array (4 x 1 TB WD10EARS disks) was showing as degraded. I looked and one of the disks wasn't installed, so I re-added it with the mdadm add command. The array is now showing as (null)Array, but can't be mounted. If I run:

        root@warren-P5K-E:/home/warren# sudo mdadm --misc --detail /dev/md0

    I get:

        mdadm: cannot open /dev/md0: No such file or directory

    and running:

        root@warren-P5K-E:/home/warren# cat /proc/mdstat

    gives me:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        unused devices: <none>

    The data is very important. This is what mdadm --examine reports on the 4 drives:

        root@warren-P5K-E:/home/warren# mdadm --examine /dev/sda
        /dev/sda:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 00000000:00000000:00000000:00000000
          Creation Time : Sat May 26 12:08:14 2012
             Raid Level : -unknown-
           Raid Devices : 0
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat May 26 12:08:40 2012
                  State : active
         Active Devices : 0
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 4
               Checksum : 82d5b792 - correct
                 Events : 1

              Number   Major   Minor   RaidDevice   State
        this     1       8       0        1         spare   /dev/sda
           0     0       8      16        0         spare   /dev/sdb
           1     1       8       0        1         spare   /dev/sda
           2     2       8      32        2         spare   /dev/sdc
           3     3       8      48        3         spare   /dev/sdd

    The output of mdadm --examine on /dev/sdb, /dev/sdc and /dev/sdd is identical to the above except for the checksums (82d5b7a0, 82d5b7b4 and 82d5b7c6 respectively, all reported as correct) and the "this" line, which on each drive reads:

        this     0       8      16        0         spare   /dev/sdb   (on /dev/sdb)
        this     2       8      32        2         spare   /dev/sdc   (on /dev/sdc)
        this     3       8      48        3         spare   /dev/sdd   (on /dev/sdd)
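
    The superblocks above show Raid Level -unknown-, zero raid devices, and all four members demoted to spares, which is why /dev/md0 no longer exists. What follows is only a sketch of the recovery sequence that is usually suggested for this situation; the drive order is inferred from the RaidDevice columns above, the chunk size is an assumption that must match the original array, and anything involving --create should only be attempted after imaging the disks, since a wrong parameter destroys the data:

        # Try a forced assemble first; this rewrites nothing beyond event counts.
        mdadm --stop /dev/md0
        mdadm --assemble --force /dev/md0 /dev/sdb /dev/sda /dev/sdc /dev/sdd

        # Last resort: recreate only the metadata, leaving data blocks alone.
        # Order (sdb=0, sda=1, sdc=2, sdd=3), metadata version and chunk size
        # are assumptions that must match the original array exactly.
        mdadm --create /dev/md0 --metadata=0.90 --level=5 --raid-devices=4 \
              --assume-clean /dev/sdb /dev/sda /dev/sdc /dev/sdd

        # Mount read-only and verify files before trusting the array again.
        mount -o ro /dev/md0 /mnt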

    Read the article

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later.

    TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky.

    For now, my API works a bit like this. This is how a shader is created:

        Shader *shader = Shader::Create(
            " ... GLSL vertex shader ... ",
            " ... GLSL pixel shader ... ",
            " ... HLSL vertex shader ... ",
            " ... HLSL pixel shader ... ");
        ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0);
        ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0);
        ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1);
        ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix");
        ShaderUniform u2 = shader->GetUniformLocation("Zoom");

    There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib.

    Now this is how I create a vertex declaration and two vertex buffers:

        VertexDeclaration *decl = VertexDeclaration::Create(
            VertexStream<vec3,vec2>(VertexUsage::Position, 0,
                                    VertexUsage::TexCoord, 0),
            VertexStream<vec4>(VertexUsage::TexCoord, 1));
        VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)));
        VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4));

    Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics.

    Once the vertex buffers have been filled with or pointed at data, this is the rendering code:

        shader->Bind();
        shader->SetUniform(u1, GetWorldMatrix());
        shader->SetUniform(u2, blah);
        decl->Bind();
        decl->SetStream(vb1, a1, a2);
        decl->SetStream(vb2, a3);
        decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3);
        decl->Unbind();
        shader->Unbind();

    You see that decl is a bit more than just a D3D-like vertex declaration; it kind of takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

    Read the article

  • How are your IT projects doing? ALM (Application Lifecycle Management) - Part 1

    - by johnywercley
    The chart shows a frightening number; in other words, things are not going well anywhere in the world. These are surveys carried out by an important body, the "Stand Group". They draw our attention to the number of projects with problems: doing a first-pass analysis, adding the green portion to the blue, we see the percentage of IT projects with problems; the projects that actually turn out well are the ones in red, a very low number. If you were a financial investor today and had to undertake a project...(read more)

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.

    Reference Data Management

    For multi-national companies with a presence in multiple industry categories, market segments, and geographies, the ability to proactively manage changes and harness them to align the front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships and mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance and control.

    Financial Transformation

    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and its underlying processes. Financial transformation often begins with the redesign of one's chart of accounts. The ability to model and redesign one's chart of accounts collaboratively, quickly validate it against historical transaction bases and secure business buy-in across multiple lines of business, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications as the in-flight transformation is piloted, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle's Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting.

    Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.

    Read the article

  • links for 2011-02-08

    - by Bob Rhubart
    When It Comes to Data Integration, Oracle Is the Right Choice (tags: ping.fm)

    Webcast: Deploy Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications. Feb 15. Event Date: 02/15/2011 9:00am PT / Noon ET. Featured Speakers: Adam Hawley (Oracle Senior Director, Product Management, Virtualization), Ivo Dujmovic (Oracle Director, Technology Integration), Greg Kelly (Oracle Product Strategy Manager - PeopleTools). (tags: oracle virtualization peoplesoft)

    Webcast: Managing Oracle Exadata with Oracle Enterprise Manager 11g. Thursday, February 10, 2011 - 10 a.m. PT / 1 p.m. ET. Ask Oracle experts questions and learn firsthand how to efficiently manage all stages of Oracle Exadata's lifecycle, from testing to deployment. (tags: oracle exalogic enterprisemanager)

    Arthur Cole: Winning the Consolidated Data Center Future | ITBusinessEdge.com. "According to InformationWeek, the amount of data under management is increasing by about 20 percent per year, with some organizations having to deal with 50 percent or more. That means capacity needs to double every two or three years." - Arthur Cole (tags: dataconsolidation enterprisearchitecture)

    Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products (Telecommunications Architecture Corner). Raul Goycoolea's post examines "how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry." (tags: oracle otn enterprisearchitecture)

    Richard Veryard on Architecture: What is an EA vendor? "Even some people who insist that enterprise architecture shouldn't be thought of as merely software architecture seem to think that 'tools' only means 'software tools.'" - Richard Veryard (tags: enterprisearchitecture)

    MDM for Tax Authorities (Oracle Master Data Management). "Tax Authorities face a multitude of IT challenges," says David Butler. "Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex." (tags: oracle otn MDM ITarchitecture)

    Bernard Golden: How Cloud Computing Changes IT Staffs | CIO.com. "Enterprise architects become more important" tops Bernard's list of changes. (tags: cloudcomputing staffing cio enterprisearchitecture)

    Martijn Linssen: Social Enterprise Magic Quadrant. "Revolutions usually go wrong, where evolutions usually go right." - Martijn Linssen (tags: socialcomputing enterprise2.0)

    Why Do IT Roles Fail? | CIO. "The roles that come up most often are the ones that are not directly building or maintaining systems. These include architecture, planning, vendor management, relationship management, PMO, and security." - Marc Cecere (tags: softwarearchitecture technologyroles)

    We're Hiring! - Server and Desktop Virtualization Product Management (Oracle's Virtualization Blog). Adam Hawley with information on an opportunity for qualified job seekers. (tags: oracle otn employment virtualization)

    Read the article

  • How do I totally disable all forms of session management in Xubuntu?

    - by Evan Carroll
    Xubuntu remembers things like xfce-terminal and chromium whenever my system starts up. I don't want that to happen, ever; I want every restart to be fresh. How do I disable this functionality? I've seen numerous tutorials saying that unchecking the "Save session for future logins" option in the logout dialog would work, but it doesn't seem to work for me. When the system comes back, the other applications are open again. I've also deleted the files in autostart, as per this question.
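
    For what it's worth, the xfconf-level equivalent of that checkbox can be forced off from a terminal and any already-saved sessions wiped. This is only a sketch; the channel and property names are what current xfce4-session builds expose, so treat them as assumptions and verify with xfconf-query -c xfce4-session -l first:

        # Turn off session saving at the xfconf level (assumed property name;
        # if it does not exist yet, create it with: -n -t bool)
        xfconf-query -c xfce4-session -p /general/SaveOnExit -s false

        # Delete any sessions that were already saved, so nothing is restored
        # at the next login:
        rm -rf ~/.cache/sessions/*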

    Read the article

  • Architecture Standards - BPMN vs. BPEL for Business Process Management

    - by pat.shepherd
    I often get asked which business process standard an organization should use: BPMN or BPEL? As I explain to folks, they both have strengths. Here is a great article that helps you understand the benefits of both and where to use them. The good news is that, with Oracle SOA Suite and BPM Suite, you have the option and flexibility to use both in the same SCA model and runtime container. Good stuff.

    Here is the great article that Mark Nelson wrote:

    The right tool for the right job

    BPEL and BPMN are both 'languages' or 'notations' for describing and executing business processes. Both are open standards. Most business process engines will support one or the other of these languages. Oracle however has chosen to support both and treat them as equals. This means that you have the freedom to choose which language to use on a process-by-process basis. And you can freely mix and match, even within a single composite. (A composite is the deployment unit in an SCA environment.) So why support both? Well, it turns out that BPEL is really well suited to modeling some kinds of processes and BPMN is really well suited to modeling other kinds of processes. Of course there is a pretty significant overlap where either will do a great job.

    Source: What BPM adds to SOA Suite | RedStack

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management
    by Joe Golemba, Vice President, Product Management, Oracle WebCenter

    As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information.

    The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative, wasted effort or even regulatory compliance issues, is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

    Unstructured data matters

    Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts.

    Historically, content management solutions with 21 CFR Part 11 capabilities (electronic records and signatures) were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency.

    Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards, such as content de-duplication, can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization.

    Putting it in context

    Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationships between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement.

    Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide, with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures, is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed up and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process.

    The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years. The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and improve time to market, to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • PPTP VPN connection unable to pass through Linux iptables

    - by user221844
    I have set up a Windows VPN server behind a Linux (Ubuntu) box that is working as a firewall and proxy server. Now I want people from outside to be able to connect to the VPN server, but the connection is not being established and the client gets error 619. I have checked the problem on the internet and it seems to be a firewall issue. What should I do to make the connection get established through the firewall? Here is the information about my setup:

        Firewall-External-IF-IP: 172.16.1.100
        Firewall-LAN-IF-IP: 192.168.1.1
        VPN-Server-IP: 192.168.1.10

    And below is my iptables file content:

        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *filter
        :INPUT ACCEPT [162000:140437619]
        :FORWARD ACCEPT [23282:27196133]
        :OUTPUT ACCEPT [185778:143961739]
        :LOGGING - [0:0]
        -A INPUT -p gre -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p tcp -m tcp --sport 1723 -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p udp -m udp --sport 1723 -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -o EXT_IF -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -i EXT_IF -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p tcp -m tcp --dport 1723 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p tcp -m tcp --sport 1723 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p gre -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -p gre -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p udp -m udp --dport 1723 -j ACCEPT
        COMMIT
        # Completed on Thu May 29 12:40:18 2014
        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *nat
        :PREROUTING ACCEPT [17865:1053739]
        :INPUT ACCEPT [5490:357281]
        :OUTPUT ACCEPT [3723:223677]
        :POSTROUTING ACCEPT [3726:223870]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10
        -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE
        COMMIT
        # Completed on Thu May 29 12:40:18 2014
        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *mangle
        :PREROUTING ACCEPT [22695965:17811993005]
        :INPUT ACCEPT [13818180:11522330171]
        :FORWARD ACCEPT [8527694:6271564562]
        :OUTPUT ACCEPT [14748508:11899678536]
        :POSTROUTING ACCEPT [23271280:18170828012]
        COMMIT
        # Completed on Thu May 29 12:40:18 2014

    Hope I find the solution here! :(
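
    One detail the ruleset above never touches is PPTP connection tracking: without the PPTP helper modules, the kernel cannot relate the GRE stream to the TCP 1723 control connection behind NAT, and clients typically fail with exactly this error 619. A sketch of what is commonly suggested, assuming a stock Ubuntu kernel (the helper module names differ between kernel generations, so treat them as assumptions):

        # Load the PPTP conntrack/NAT helpers (older kernels call these
        # ip_conntrack_pptp and ip_nat_pptp instead):
        modprobe nf_conntrack_pptp
        modprobe nf_nat_pptp

        # Forwarding must also be enabled at the kernel level
        # (persist it via net.ipv4.ip_forward=1 in /etc/sysctl.conf):
        sysctl -w net.ipv4.ip_forward=1

    The DNAT rules for TCP 1723 and for GRE (protocol 47) are already present in the nat table above, so with the helpers loaded and forwarding enabled, the pass-through normally starts working.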

    Read the article

  • How do I login to SQL Server without having to use "Run as Administrator" when starting Management Studio?

    - by MedicineMan
    When I start Management Studio, unless I use the "Run as Administrator" option, I cannot log in to my local SQL Server. Is this normal? I am a normal developer and don't believe I need high security on my local machine. I'm running SQL Server 2008 on Windows 7. The error I get is:

        Cannot connect to (local)
        Additional Information:
        Login failed for user 'MYCOMPUTER\MyName'. (Microsoft SQL Server, Error: 18456)
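
    This pattern usually means the account itself has no SQL Server login and is only being let in via a BUILTIN\Administrators-style group login, which UAC strips from a non-elevated token; that would explain why elevation works. A sketch of the usual one-time fix, run from an elevated command prompt, using the account name from the error message above (the sysadmin grant is the blunt option for a local dev box; a narrower server role works too):

        rem One-time grant, executed while elevated so the connection succeeds;
        rem afterwards Management Studio no longer needs "Run as Administrator".
        sqlcmd -S . -E -Q "CREATE LOGIN [MYCOMPUTER\MyName] FROM WINDOWS"
        sqlcmd -S . -E -Q "EXEC sp_addsrvrolemember 'MYCOMPUTER\MyName', 'sysadmin'"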

    Read the article

  • Is there a remote desktop management tool that I can email to people?

    - by Matt 'Trouble' Esse
    I often need to remotely manage PCs and Macs for desktop support. I'm after a remote desktop support tool that I could email (or send as a URL) for the customer to click on (or run), after which I could remotely manage their PC/Mac. A tool that works on both operating systems would be great but not mandatory (a separate tool for each will suffice). A tool that has an iPhone app would be fantastic too, but this is very much a 'wish list' item. Looking forward to your suggestions!

    Read the article

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, DHCP3-server, eth0, eth1, eth2 (Edit: removed br0 & br1). eth0 is the external connection; eth1 and eth2 are the internal network, and are supposed to be separate networks for students and teachers respectively. What I would like to have is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is bridging for internet. I have set up a script that I found here: Router. With the original script he linked here: Ubuntu Router Guide.

        echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
        IPTABLES=/sbin/iptables
        #IPTABLES=/usr/local/sbin/iptables
        DEPMOD=/sbin/depmod
        MODPROBE=/sbin/modprobe
        EXTIF="eth0"
        INTIF="eth1"
        INTIF2="eth2"
        echo "   External Interface: $EXTIF"
        echo "   Internal Interface: $INTIF"
        echo "   Internal Interface: $INTIF2"
        EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
        echo "   External IP: $EXTIP"
        #======================================================================
        #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP from the eth1 and eth2 devices, and my computer can see them, and they it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0, pings pass through; however, I cannot get DNS requests to pass. My new firewall rules:

        # /etc/iptables.up.rules
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *mangle
        :PREROUTING ACCEPT [39:4283]
        :INPUT ACCEPT [39:4283]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [12:4884]
        :POSTROUTING ACCEPT [13:5145]
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -j LOG
        -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *nat
        :INPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed for the operators on site to know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.
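
    Given that pings now pass but DNS does not, a plausible checklist follows; this is only a sketch, under the assumption that the LAN clients are pointed at an upstream DNS server rather than a resolver running on the router itself:

        # 1. Kernel forwarding must be on (persist via net.ipv4.ip_forward=1
        #    in /etc/sysctl.conf):
        sysctl -w net.ipv4.ip_forward=1

        # 2. Explicitly allow DNS through the FORWARD chain; harmless if the
        #    broader state rules above already cover it:
        iptables -A FORWARD -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 53 -j ACCEPT

        # 3. Having both MASQUERADE and SNAT on eth0 is redundant; dropping one
        #    avoids surprises (removing MASQUERADE shown, assuming 192.168.1.25
        #    really is the external address):
        iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

        # 4. dhcpd should also hand the clients a resolver, e.g. in dhcpd.conf:
        #    option domain-name-servers 8.8.8.8;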

    Read the article

  • New Podcast Available: Product Value Chain Management: How Oracle is Taking the Lead on Next Gen Enterprise PLM

    - by Terri Hiskey
    A new podcast on how Oracle is taking the lead in enterprise PLM with our Product Value Chain solution is now available. In case you're not yet familiar with the concept of the Product Value Chain, it's an integrated business model powered by Oracle that offers executives the ability to collectively leverage enterprise Agile PLM, Product Data Hub, Enterprise Data Quality, AutoVue Enterprise Visualization, and other industry-leading Oracle applications for incremental value. In this quick, 10-minute podcast, you'll hear John Kelley, VP PLM Product Strategy, and Terri Hiskey, Director, PLM Product Marketing, discuss Oracle's vision for next-generation enterprise PLM: the Product Value Chain. http://feedproxy.google.com/~r/OracleAppcast/~3/jxAED7ugMEc/11525926_Enterprise_PLM_040612.mp3

    Read the article

  • iptables: how to correctly allow incoming and outgoing traffic for certain ports?

    - by Rubytastic
    I'm trying to get incoming and outgoing traffic enabled on specific ports, because I block everything at the end of my iptables rules: INPUT and FORWARD reject. What would be the appropriate way to open certain ports for all incoming and outgoing traffic? From the docs I found the two lines below, but does one really have to define both?

        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

    I am trying to open ports for the XMPP service and some other daemons running on the server. Rules:

        *filter
        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -d 127.0.0.0/8 -j REJECT
        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT
        # Allow HTTP
        # Prevent DDOS attacks (http://blog.bodhizazen.net/linux/prevent-dos-with-iptables/)
        # Disallow HTTPS
        -A INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 50/minute --limit-burst 200 -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 50/second --limit-burst 50 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j DROP
        # Allow SSH connections
        # The -dport number should be the same port number you set in sshd_config
        -A INPUT -p tcp -s <myip> --dport ssh -j ACCEPT
        -A INPUT -p tcp -s <myip> --dport 5984 -j ACCEPT
        -A INPUT -p tcp --dport ssh -j REJECT
        # Attempt to block portscans
        # Anyone who tried to portscan us is locked out for an entire day.
        -A INPUT -m recent --name portscan --rcheck --seconds 86400 -j DROP
        -A FORWARD -m recent --name portscan --rcheck --seconds 86400 -j DROP
        # Once the day has passed, remove them from the portscan list
        -A INPUT -m recent --name portscan --remove
        -A FORWARD -m recent --name portscan --remove
        # These rules add scanners to the portscan list, and log the attempt.
        -A INPUT -p tcp -m tcp --dport 139 -m recent --name portscan --set -j LOG --log-prefix "Portscan:"
        -A INPUT -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP
        -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j LOG --log-prefix "Portscan:"
        -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP
        # Stop smurf attacks
        -A INPUT -p icmp -m icmp --icmp-type address-mask-request -j DROP
        -A INPUT -p icmp -m icmp --icmp-type timestamp-request -j DROP
        -A INPUT -p icmp -m icmp -j DROP
        # Drop excessive RST packets to avoid smurf attacks
        -A INPUT -p tcp -m tcp --tcp-flags RST RST -m limit --limit 2/second --limit-burst 2 -j ACCEPT
        # Don't allow pings through
        -A INPUT -p icmp -m icmp --icmp-type 8 -j DROP
        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT
        COMMIT
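
    Since this ruleset already accepts ESTABLISHED,RELATED inbound and all outbound traffic, the paired OUTPUT rule from the docs is only needed when the OUTPUT policy is restrictive; here it is not. A sketch of opening a new service would then be a single stateful INPUT rule per port, using XMPP's standard ports as the (assumed) example:

        # With `-A OUTPUT -j ACCEPT` and the ESTABLISHED,RELATED INPUT rule in
        # place, one NEW rule per listening port is enough; the reply packets
        # are matched by the state rules. Assumed XMPP ports: 5222
        # (client-to-server) and 5269 (server-to-server). -I inserts at the
        # top of the chain so the rules land before the final REJECT.
        iptables -I INPUT -p tcp --dport 5222 -m state --state NEW -j ACCEPT
        iptables -I INPUT -p tcp --dport 5269 -m state --state NEW -j ACCEPT

    Placement matters either way: an appended (-A) rule would sit after the final -A INPUT -j REJECT and never be reached.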

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers: Web services, Web UIs, etc. I am looking for an open-source or 3rd-party software solution to manage the settings of our application. These would be similar to the settings you might find in Word, Eclipse, or Visual Studio. The settings would control various behaviors and features of the product (probably not settings like which database to connect to, but more like whether to show line numbers on the page by default). Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment...). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service from which each of the deployables can get its individual settings. Does anyone know of a centralized service providing this sort of feature, or can you give me some help in searching for an alternative to rolling our own?

    Read the article

  • A good file management/hosting/storage web service with embed-function?

    - by Andreas
    I am looking for a file management web service that lets me integrate the directory view into a commercial website. Another requirement: users can register themselves, but need to be approved before being able to download files. So something like box.net, but with more than just a Flash widget. I would prefer some JavaScript that can be embedded. Thanks for any recommendations.

    Read the article

  • How do I simplify a 2D game grid for level management while keeping its by-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow as this seems to be a more appropriate forum. I've looked around a little here and did not find an answer, so I hope this is not a recurring question.) This is a question dealing with 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to be able to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design. I need a way to have level files that store everything there is about a game. This is easiest by just having a grid of objects. But there are thin walls and other objects that don't seem to fit into a traditional cell of a grid. I want to be able to fit all these together so I can streamline level design, so I don't have to put in the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a Pac-Man-like scenario, meaning a scenario where the game feels boxy and discrete. There is a contrast between the smoothly (relatively) moving characters and finite jumps in a grid. I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).

    Read the article

  • What are the duties of a software configuration management (SCM) engineer in a large company?

    - by Alex. S.
    I'm curious about the canonical responsibilities of such a specialized role. Normally I would expect this to be part of the tasks of an ordinary developer, but I know that in large companies this role is fulfilled by an engineer of its own. In my current company there is a possibility of a new opening for an SCM position, so I could apply, but first I would like to hear what, in your experience, best characterizes this role.

    Read the article

  • When is a glue or management class doing too much?

    - by jprete
    I'm prone to building centralized classes that manage the other classes in my designs. It doesn't store everything itself, but most data requests would go to the "manager" first. While looking at an answer to this question I noticed the term "God Object". Wikipedia lists it as an antipattern, understandably. Where is the line between a legitimate glue class, or module, that passes data and messages from place to place, and a class that is doing too much?

    Read the article

  • Could it be more efficient for systems in general to do away with stacks and just use the heap for memory management?

    - by Dark Templar
    It seems to me that everything that can be done with a stack can be done with the heap, but not everything that can be done with the heap can be done with the stack. Is that correct? Then, for simplicity's sake, and even if we do lose a small amount of performance with certain workloads, couldn't it be better to just go with one standard (i.e., the heap)? Think of the trade-off between modularity and performance. I know that isn't the best way to describe this scenario, but in general it seems that simplicity of understanding and design could be a better option even if there is a potential for better performance.

    Read the article

  • What is the/Is there a right way to tell management that our code sucks?

    - by Azkar
    Our code is bad. It might not have always been considered bad, but it is bad and is only going downhill. I started fresh out of college less than a year ago, and many of the things in our code puzzle me beyond belief. At first I figured that as the new guy I should keep my mouth shut until I learned a little more about our code base, but I've seen plenty to know that it's bad.

    Some of the highlights:

    - We still use frames (try getting something out of a query string; almost impossible)
    - VBScript
    - SourceSafe
    - We 'use' .NET - by that I mean we have .NET wrappers that call COM DLLs, making it almost impossible to debug easily
    - Everything is basically one giant function
    - Code is not maintainable. Each page has multiple files that are created every time a new page is made. The main page basically does Response.Write() a bunch of times to render the HTML (runat="server"? no way). After that there can be a lot of logic on the client side (VBScript), and finally the page submits to itself (often storing many things in hidden fields) and then posts to a processing page which can do things such as save the data to the database.
    - The specifications we get are laughable. Often they call for things like "auto-populate field X with either field Y or field Z" with no indication of when to choose field Y or field Z.

    I'm sure some of this is a result of not being employed at a software company, but I feel as if people writing software should at least care about the quality of their code. I can't even imagine that if I were to bring something up anything would be done about it soon, as there is a large deadline looming, but we are continuing to write bad code and use bad practices. What can I do? How do I even bring these issues up? 75% of my team agree with me and have brought up these issues in the past, yet nothing gets changed.

    Read the article
