Search Results

Search found 64086 results on 2564 pages for 'algebraic data types'.


  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase of the data that mankind produces each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones and the discernible trend of mainstream Internet traffic moving to smartphone-generated traffic mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines. The amount of data makes it nearly impossible to process it in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which has become one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article there are already about half a dozen commercially supported distributions of Hadoop. For me, as someone who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it is ultimately beneficial to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third fact I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs.   BuzzNet Tags: Hadoop, Big Data, Enterprise IT

    Read the article

  • OAM11gR2: Enabling SSL in the Data Store

    - by Ekta Malik
    Enabling SSL in the Data Store of OAM11gR2 comprises the steps below: import the certificate(s) required for establishing trust with the (backend) store into the keystore (cacerts) on the machine hosting OAM's WebLogic Admin server; restart the WebLogic Admin server; then specify the <Hostname>:<SSL port> in the "Location" field of the Data Store and select the "Enable SSL" checkbox. Pre-requisites: the certificate(s) to be imported are available for import, and the Data Store has already been created using the OAM admin console with a successful connection to the store on the non-SSL port (though one can always create a Data Store with SSL settings on the first go). Steps for importing the certificate(s): one can use the keytool utility that comes bundled with the JDK to import the certificate. The step is the same for self-signed and third-party certificates (like VeriSign): $JAVA_HOME/bin/keytool -import -v -noprompt -trustcacerts -alias <aliasname> -file <Path to the certificate file> -keystore $JAVA_HOME/jre/lib/security/cacerts Here $JAVA_HOME refers to the path of the JDK install directory. Note: in case multiple certificates are required for establishing the trust, import all those certificates using the same keytool command mentioned above. One can verify the import of the certificate(s) with the following command: $JAVA_HOME/bin/keytool -list -alias <aliasname> -v -keystore $JAVA_HOME/jre/lib/security/cacerts Once the trust is established for the SSL communication, specifying the SSL-specific settings in the Data Store (via the OAM admin console) will no longer produce the error previously seen (when certificates were yet to be imported), and the "Test Connection" will be successful.

    Read the article

  • Security Controls on data for P6 Analytics

    - by Jeffrey McDaniel
    The Star database and P6 Analytics calculate security based on P6 security, using OBS, global, project, cost, and resource security considerations. If there is some concern that users are not seeing expected data in P6 Analytics, here are some areas to review: 1. Determining whether a user has cost security is based on the project-level security privileges - either View Project Costs/Financials or Edit EPS Financials. If the user expects to see costs, make sure one of these privileges is allocated. 2. The user must have OBS access on a project, not at the WBS level; WBS-level security is not supported. Make sure the user has OBS access at the project level. 3. Resource access is determined by what is granted in P6, so verify the resource access granted to this user in P6. Resource security is hierarchical; project access will override resource access based on the way security policies are applied. 4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. For earlier versions of the RDB there was a report_user_flag on the Users table; this flag field is no longer used after P6 Reporting Database 2.1. 5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any changes to privileges or security, this service must be rerun before the next ETL. 6. In P6 Analytics 2.0 or higher, a WebLogic user must exist that matches the P6 username. For example, the user Tim must exist in both P6 and the WebLogic users for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD. 7. Cache in OBI is another area that can sometimes make it seem as though a user isn't seeing the data they expect. While cache can be beneficial for performance in OBI, if the data is outdated it can return older, stale results. Clearing or turning off cache when rerunning a query can determine whether the returned result set came from cache or from the database.

    Read the article

  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure as such: public class TimeTable { public int Id {get;set;} public List<Run> Runs {get;set;} } public class Run { public List<Stop> Stops {get;set;} public int RunId {get;set;} } public class Stop { public int StationId {get;set;} public TimeSpan? StopTime {get;set;} public bool IsStop {get;set;} } We have a list of runs that operate against a particular line (the TimeTable class). Further, whilst we have a set collection of stations that are on a line, not all runs stop at all stations (that is, IsStop would be false, and StopTime would be null). Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database - the data structure is used only to load the timetable from its source and to persist it to the database. We are now receiving an updated timetable. The updated timetable may or may not have any changes to it - we don't know and are not told whether any changes are present. What I would like to do is perform a compare for each run in an efficient manner. I don't want to simply replace each run. Instead, I want to have a background task that runs periodically that downloads the updated timetable dataset, and then compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place. I was initially thinking of some sort of checksum process, where I could, for example, load both runs (that is, the one from the new timetable received and the one that has been persisted to the database) into the data structure and then add up all the hour components of the StopTime, and all the minute components of the StopTime and compare the results (i.e. both the sum of Hours and sum of Minutes would be the same, and differences introduced if a stop time is changed, a stop deleted or a new stop added). Would that be a valid way to check for differences, or is there a better way to approach this problem? I can see a problem where, for example, one stop changed to be 2 minutes earlier and another changed to be 2 minutes later would produce a net zero change. Or am I overthinking this, and would it just be simpler to brute-check all stops to ensure that the updated run stops at the same stations, and that each stop is at the same time?

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Discovering XML Data Type Methods

    - by pinaldave
    This blog post is inspired by SQL Interoperability Joes 2 Pros: A Guide to Integrating SQL Server with XML, C#, and PowerShell – SQL Exam Prep Series 70-433 – Volume 5. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Discovering XML Data Type Methods – A Primer. In that article we discussed the basic terminology of XML, and it covers the following important concepts: What are XML Data Type Methods, The query() Method, The value() Method, The exist() Method, The modify() Method. The above five are the most important concepts related to XML and SQL Server. There are many more things to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1.) Which method returns an XML fragment from the source XML? query( ) value( ) exist( ) modify( ) All of them Only query( ) and value( ) 2.) Which XML data type method returns a “1” if found and “0” if the specified XPath is not found in the source XML? query( ) value( ) exist( ) modify( ) All of them Only query( ) and value( ) 3.) Which XML data type method allows you to pick the data type of the value that is returned from the source XML? query( ) value( ) exist( ) modify( ) All of them Only query( ) and value( ) 4.) Which method will not work with a SQL SELECT statement? query( ) value( ) exist( ) modify( ) All of them Only query( ) and value( ) Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance. Solution 1) 1 2) 3 3) 2 4) 4 Now let us check the answers; compare your answers to the solution above. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices? The classic answer: write a web service layer Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different? The better answer: the iSql SDK Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API. OSQL in 25 lines of code As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video. Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Import Data from Excel sheet to DB Table through OAF page

    - by PRajkumar
    1. Create a New Workspace and Project File > New > General > Workspace Configured for Oracle Applications File Name – PrajkumarImportxlsDemo   Automatically a new OA Project will also be created   Project Name -- ImportxlsDemo Default Package -- prajkumar.oracle.apps.fnd.importxlsdemo   2. Add JAR file jxl-2.6.3.jar to Apache Library Download jxl-2.6.3.jar from following link – http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html   Steps to add jxl.jar file in Local Machine Right Click on ImportxlsDemo > Project Properties > Libraries > Add jar/Directory and browse to directory where jxl-2.6.3.jar has been downloaded and select the JAR file            Steps to add jxl.jar file at EBS middle tier On your EBS middile tier copy jxl.jar at $FND_TOP/java/3rdparty/standalone Add $FND_TOP/java/3rdparty/standalone\jxl.jar to custom classpath in Jser.properties file which is at $IAS_ORACLE_HOME/Apache/Jserv/etc wrapper.classpath=/U01/oracle/dev/devappl/fnd/11.5.0/java/3rdparty/stdalone/jxl.jar Bounce Apache Server   3. Create a New Application Module (AM) Right Click on ImportxlsDemo > New > ADF Business Components > Application Module Name -- ImportxlsAM Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server   Check Application Module Class: ImportxlsAMImpl Generate JavaFile(s)   4. Create Test Table in which we will insert data from excel CREATE TABLE xx_import_excel_data_demo (    -- --------------------      -- Data Columns      -- --------------------      column1                 VARCHAR2(100),      column2                 VARCHAR2(100),      column3                 VARCHAR2(100),      column4                 VARCHAR2(100),      column5                 VARCHAR2(100),      -- --------------------      -- Who Columns      -- --------------------      last_update_date   DATE         NOT NULL,      last_updated_by    NUMBER   NOT NULL,      creation_date         DATE         NOT NULL,      created_by             NUMBER    NOT NULL,      last_update_login  NUMBER );   5. Create a New Entity Object (EO) Right click on ImportxlsDemo > New > ADF Business Components > Entity Object Name – ImportxlsEO Package -- prajkumar.oracle.apps.fnd.importxlsdemo.schema.server Database Objects -- XX_IMPORT_EXCEL_DATA_DEMO   Note – By default ROWID will be the primary key if we will not make any column to be primary key Check the Accessors, Create Method, Validation Method and Remove Method   6. Create a New View Object (VO) Right click on ImportxlsDemo > New > ADF Business Components > View Object Name -- ImportxlsVO Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server   In Step2 in Entity Page select ImportxlsEO and shuttle it to selected list In Step3 in Attributes Window select all columns and shuttle them to selected list   In Java page Uncheck Generate Java file for View Object Class: ImportxlsVOImpl Select Generate Java File for View Row Class: ImportxlsVORowImpl -> Generate Java File -> Accessors   7. Add Your View Object to Root UI Application Module Right click on ImportxlsAM > Edit ImportxlsAM > Data Model > Select ImportxlsVO and shuttle to Data Model list   8. Create a New Page Right click on ImportxlsDemo > New > Web Tier > OA Components > Page Name -- ImportxlsPG Package -- prajkumar.oracle.apps.fnd.importxlsdemo.webui   9. Select the ImportxlsPG and go to the strcuture pane where a default region has been created   10. 
Select region1 and set the following properties:   Attribute Property ID PageLayoutRN AM Definition prajkumar.oracle.apps.fnd.importxlsdemo.server.ImportxlsAM Window Title Import Data From Excel through OAF Page Demo Window Title Import Data From Excel through OAF Page Demo   11. Create messageComponentLayout Region Under Page Layout Region Right click PageLayoutRN > New > Region   Attribute Property ID MainRN Item Style messageComponentLayout   12. Create a New Item messageFileUpload Bean under MainRN Right click on MainRN > New > messageFileUpload Set Following Properties for New Item --   Attribute Property ID MessageFileUpload Item Style messageFileUpload   13. Create a New Item Submit Button Bean under MainRN Right click on MainRN > New > messageLayout Set Following Properties for messageLayout --   Attribute Property ID ButtonLayout   Right Click on ButtonLayout > New > Item   Attribute Property ID Go Item Style submitButton Attribute Set /oracle/apps/fnd/attributesets/Buttons/Go   14. Create Controller for page ImportxlsPG Right Click on PageLayoutRN > Set New Controller Package Name: prajkumar.oracle.apps.fnd.importxlsdemo.webui Class Name: ImportxlsCO   Write Following Code in ImportxlsCO in processFormRequest import oracle.apps.fnd.framework.OAApplicationModule; import oracle.apps.fnd.framework.OAException; import java.io.Serializable; import oracle.apps.fnd.framework.webui.OAControllerImpl; import oracle.apps.fnd.framework.webui.OAPageContext; import oracle.apps.fnd.framework.webui.beans.OAWebBean; import oracle.cabo.ui.data.DataObject; import oracle.jbo.domain.BlobDomain; public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);  if (pageContext.getParameter("Go") != null)  {   DataObject fileUploadData = (DataObject)pageContext.getNamedDataObject("MessageFileUpload");   String fileName = null;                 try   {    fileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");   }   catch(NullPointerException ex)   {    throw new OAException("Please Select a File to Upload", OAException.ERROR);   }   BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);   try   {    OAApplicationModule oaapplicationmodule = pageContext.getRootApplicationModule();    Serializable aserializable2[] = {uploadedByteStream};    Class aclass2[] = {BlobDomain.class };    oaapplicationmodule.invokeMethod("ReadExcel", aserializable2,aclass2);   }   catch (Exception ex)   {    throw new OAException(ex.toString(), OAException.ERROR);   }  } }     Write Following Code in ImportxlsAMImpl.java import java.io.IOException; import java.io.InputStream; import jxl.Cell; import jxl.CellType; import jxl.Sheet; import jxl.Workbook; import jxl.read.biff.BiffException; import oracle.apps.fnd.framework.server.OAApplicationModuleImpl; import oracle.jbo.Row; import oracle.apps.fnd.framework.OAViewObject; import oracle.apps.fnd.framework.server.OAViewObjectImpl; import oracle.jbo.domain.BlobDomain; public void createRecord(String[] excel_data) {   OAViewObject vo = (OAViewObject)getImportxlsVO1();            if (!vo.isPreparedForExecution())    {   vo.executeQuery();      }                      Row row = vo.createRow();  try  {   for (int i=0; i < excel_data.length; i++)   {    row.setAttribute("Column" +(i+1) ,excel_data[i]);   }  }  catch(Exception e)  {   System.out.println(e.getMessage());   }  vo.insertRow(row);  getTransaction().commit(); }      public void ReadExcel(BlobDomain fileData) throws 
IOException {  String[] excel_data  = new String[5];  InputStream inputWorkbook = fileData.getInputStream();  Workbook w;  try  {   w = Workbook.getWorkbook(inputWorkbook);   // Get the first sheet   Sheet sheet = w.getSheet(0);   for (int i = 0; i < sheet.getRows(); i++)   {    for (int j = 0; j < sheet.getColumns(); j++)    {     Cell cell = sheet.getCell(j, i);     CellType type = cell.getType();     if (cell.getType() == CellType.LABEL)     {      System.out.println("I got a label " + cell.getContents());      excel_data[j] = cell.getContents();     }     if (cell.getType() == CellType.NUMBER)     {      System.out.println("I got a number " + cell.getContents());      excel_data[j] = cell.getContents();     }    }    createRecord(excel_data);   }  }  catch (BiffException e)  {   e.printStackTrace();  } }   15. Congratulations, you have successfully finished. Run your page and test your work. Consider an Excel file PRAJ_TEST.xls with the following data, and try to import this data into the DB table.

    Read the article

  • Is the PowerPC (G4) architecture better than Intel x86 architecture for certain types of application

    - by ralphc
    I have a PowerMac G4 from around the year 2000. It's a serviceable Unix machine, but I don't do much with it since I have plenty of Pentium 4 machines around with Linux on them. I was wondering whether the PowerPC-based machine is capable of handling certain tasks faster or "better" than Intel-based machines, or is it just another computer to use? I have OSX 10.4 Tiger for it, and I don't mind installing Yellow Dog Linux or UbuntuPPC if that will make it more usable.

    Read the article

  • gparted installed on OpenSuse shows all file system types as greyed out except for hfs

    - by cmdematos.com
    I have had this problem before and fixed it, but I don't recall how I did it and I did not record it (sadness :( ). I have all the requisite commands installed on OpenSuse to support gparted's efforts in creating any of the supported file systems. I recall that the problem was that gparted could not find the commands; in any event, all the file systems are greyed out in the context menu except for the legacy HFS option, which only supports < 2 GB. Even ext2 through ext4 are greyed out. How do I fix this?
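
    Typically gparted greys out a filesystem when it cannot find the matching mkfs/check tool on the PATH it sees. A rough way to check this from a shell is sketched below; the openSUSE package names are assumptions, so verify them with zypper search before installing:

        # See which filesystem-creation tools gparted can actually find
        which mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.ntfs mkfs.vfat mkfs.xfs

        # Install whichever are missing (package names are a guess; ntfsprogs
        # may be provided by ntfs-3g on newer releases)
        sudo zypper install e2fsprogs ntfsprogs dosfstools xfsprogs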

    Read the article

  • Files missing after copying from one sdcard to another using Ubuntu 12.04

    - by Abhi Kandoi
    I have two Kingston sdcards, 8GB and 32GB: the 8GB for my HTC Sensation and the 32GB for my tablet (Chinese). I soon ran out of space on my phone and decided to swap the data between the sdcards. I copied the data from the 32GB sdcard to my PC (with Ubuntu 12.04), deleted everything on the 32GB sdcard, and copied the data from the 8GB sdcard to the other sdcard. At last I copied the data from my PC to the 8GB sdcard (before this I deleted everything that was on it). Now the problem is that the 8GB sdcard has exactly the same data, but the 32GB sdcard has some data missing. All my photos (not copied or posted on Facebook) and songs are lost. Please help: what can I do to get my data back? I have Android Revolution HD (ICS 4.0.3) on my rooted HTC Sensation (India). I suspect it has something to do with my data loss.

    Read the article

  • Preventing access to files if a user types the full url on the address bar

    - by bogha
    I have a website, and some folders on the website contain images and files like .pdf, .doc and .docx. The user can easily just type the address in the URL to get the file or display the photo, e.g. http://site/folder1/img/pic1.jpg, and then boom.. he can see the image or just download the file. My question is: how do I prevent this kind of action, and how can I guarantee secure access to the files? Any suggestions? UPDATE TO CLARIFY MY IDEA: I don't want any user who is browsing the website to get access to these files simply by typing the URL of the file. Those files are CV files; they are uploaded by the users to a specific folder on the server, which we host outside the company. Those files are only viewed by the HR people through a special system. That's the scenario we want. I don't want a WEB GEEK who just wants to see what files have been uploaded to this folder to be able to download them easily to his/her computer and view them or publish them on the internet. I hope you got my idea.

    Read the article

  • tar a directory and only include certain file types

    - by Susan
    Following the instructions here: http://linuxdevcenter.com/pub/a/linux/lpt/20_08.html I'm aiming to tar up a directory but only want to include .php files from that directory. Given the aforementioned instructions, I've come up with this command. It creates a file called IncludeTheseFiles which lists all the .php files, then the tar is supposed to do its job using only the files listed in IncludeTheseFiles: find myProjectDirectory -type f -print | \ egrep '(\.[php]|[Mm]akefile)$' > IncludeTheseFiles tar cvf myProjectTarName -I IncludeTheseFiles However, when I run this it doesn't like the -I include option: tar: invalid option -- I
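
    On GNU tar the file list is read with -T (--files-from); -I means something else (or is invalid) depending on the tar version, which matches the error above. A minimal sketch of the same idea follows - note also that the bracket expression \.[php] matches a single letter p or h, so a grouped pattern is probably what was intended. The paths and archive name are placeholders:

        # List only the .php files (and Makefiles), then archive exactly that list
        find myProjectDirectory -type f \
            | egrep '(\.php|[Mm]akefile)$' > IncludeTheseFiles

        tar cvf myProjectTarName.tar -T IncludeTheseFiles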

    Read the article

  • Custom SNMP Cacti Data Source fails to update

    - by Andrew Wilkinson
    I'm trying to create a custom SNMP datasource for Cacti but despite everything I can check being correct, it is not creating the rrd file, or updating it even when I create it. Other, standard SNMP sources are working correctly so it's not SNMP or permissions that are the problem. I've created a new Data Query, which when I click on "Verbose Query" on the device screen returns the following: + Running data query [10]. + Found type = '3' [SNMP Query]. + Found data query XML file at '/volume1/web/cacti/resource/snmp_queries/syno_volume_stats.xml' + XML file parsed ok. + missing in XML file, 'Index Count Changed' emulated by counting oid_index entries + Executing SNMP walk for list of indexes @ '.1.3.6.1.2.1.25.2.3.1.3' Index Count: 8 + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' value: 'Physical memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' value: 'Virtual memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' value: 'Memory buffers' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' value: 'Cached memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' value: 'Swap space' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' value: '/' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' value: '/volume1' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' value: '/opt' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' results: '1' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' results: '3' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' results: '6' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' results: '7' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' results: '10' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' results: '31' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' results: '32' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' results: '33' + Located input field 'index' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.3' + Found item [index='Physical memory'] index: 1 [from value] + Found item [index='Virtual memory'] index: 3 [from value] + Found item [index='Memory buffers'] index: 6 [from value] + Found item [index='Cached memory'] index: 7 [from value] + Found item [index='Swap space'] index: 10 [from value] + Found item [index='/'] index: 31 [from value] + Found item [index='/volume1'] index: 32 [from value] + Found item [index='/opt'] index: 33 [from value] + Located input field 'volsizeunit' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.4' + Found item [volsizeunit='1024 Bytes'] index: 1 [from value] + Found item [volsizeunit='1024 Bytes'] index: 3 [from value] + Found item [volsizeunit='1024 Bytes'] index: 6 [from value] + Found item [volsizeunit='1024 Bytes'] index: 7 [from value] + Found item [volsizeunit='1024 Bytes'] index: 10 [from value] + Found item [volsizeunit='4096 Bytes'] index: 31 [from value] + Found item [volsizeunit='4096 Bytes'] index: 32 [from value] + Found item [volsizeunit='4096 Bytes'] index: 33 [from value] + Located input field 'volsize' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.5' + Found item [volsize='1034712'] index: 1 [from value] + Found item [volsize='3131792'] index: 3 [from value] + Found item [volsize='1034712'] index: 6 [from value] + Found item [volsize='775904'] index: 7 [from value] + Found item [volsize='2097080'] index: 10 [from value] + Found item [volsize='612766'] index: 31 [from value] + Found item [volsize='1439812394'] index: 32 [from value] + Found item [volsize='1439812394'] index: 33 [from value] + Located input field 'volused' [walk] 
+ Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.6' + Found item [volused='1022520'] index: 1 [from value] + Found item [volused='1024096'] index: 3 [from value] + Found item [volused='32408'] index: 6 [from value] + Found item [volused='775904'] index: 7 [from value] + Found item [volused='1576'] index: 10 [from value] + Found item [volused='148070'] index: 31 [from value] + Found item [volused='682377865'] index: 32 [from value] + Found item [volused='682377865'] index: 33 [from value] As you can see, it appears to be returning the correct data. I've also set up data templates and graph templates to display the data. The "create graphs for a device" screen shows the correct data, and when selecting one row and clicking create, a new data source and graph are created. Unfortunately the data source is never updated. Increasing the poller log level suggests that it is not even querying the data source, despite it being in use. What should my next steps be to debug this issue?
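
    A rough way to narrow this down from the Cacti host is sketched below: confirm the walk works outside Cacti, check whether the RRD file is being written at all, and run the poller once in the foreground. The host, community string and RRD path are placeholders, and the poller flags assume a standard Cacti install:

        # Does the device answer the walk Cacti would issue?
        snmpwalk -v2c -c public 192.168.1.10 .1.3.6.1.2.1.25.2.3.1.6

        # Does the RRD exist, and when did it last receive data?
        rrdtool info /var/www/cacti/rra/device_volused_123.rrd | head
        rrdtool lastupdate /var/www/cacti/rra/device_volused_123.rrd

        # Run the poller once in the foreground with debug output
        php /var/www/cacti/poller.php --force --debug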

    Read the article

  • Setup kickstart boot for all installation media types (cd and usb-flash)

    - by Cucumber
    I created my own custom CentOS ISO; I used mkisofs to make it. This is part of my isolinux.cfg file: label vesa menu label Install ^RAIDIX system kernel vmlinuz append initrd=initrd.img xdriver=vesa nomodeset text linux ks=cdrom:/isolinux/ks.cfg If I specify the parameter ks=cdrom:/isolinux/ks.cfg, my ISO will boot only from a CD or DVD-ROM. If I specify the parameter ks=hd:<device>:/ks.cfg, my ISO will boot only from a USB drive. Can I specify the ks parameter so that it boots from both types of installation media?
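
    One approach that is often suggested (a sketch only - verify it against the anaconda version on the ISO) is to reference a filesystem label that exists on both the burned disc and the USB stick, so the same append line works for either medium. The label RAIDIX below is an assumption:

        # Label the USB stick to match what the kernel line will look for
        # (device name and label are assumptions - adjust for your stick)
        sudo mkfs.vfat -n RAIDIX /dev/sdb1

        # Then in isolinux.cfg point ks= at the label instead of cdrom: or hd:<device>:
        #   append initrd=initrd.img xdriver=vesa nomodeset text linux ks=hd:LABEL=RAIDIX:/isolinux/ks.cfg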

    Read the article

  • OSX: Selecting default application for all unknown and different file types (extensions)

    - by Leo
    I work in cluster computing and am using Mac OS X 10.6. I send off hundreds of computing jobs a day, and each one comes back with a different extension. For example, svmGeneSelect.o12345, which is the standard output of my svmGeneSelect job, job number 12345. I don't control the extensions. All files are plain text. I want OSX to open any file extension that it hasn't seen before with my favorite text editor when I click on it. Or, even better, set up file association defaults for extension patterns, i.e. TextEdit for extensions matching *.o*. I do NOT want to create file associations for individual files, since each extension will only ever exist once, and I do not want to go through the process of selecting the application to use for each file. Thanks for any help you can offer.

    Read the article

  • Can't make SELinux context types permanent with semanage

    - by Safado
    I created a new folder at /modevasive to hold my mod_evasive scripts and to act as its log directory. I'm trying to change the context type to httpd_sys_content_t so Apache can write to the folder. I did semanage fcontext -a -t "httpd_sys_content_t" /modevasive to change the context and then restorecon -v /modevasive to apply the change, but restorecon didn't do anything. So I used chcon to change it manually, ran restorecon again to see what would happen, and it changed it back to default_t. semanage fcontext -l gives: /modevasive/ all files system_u:object_r:httpd_sys_content_t:s0 And looking at /etc/selinux/targeted/contexts/files/file_contexts.local gives /modevasive/ system_u:object_r:httpd_sys_content_t:s0 So why does restorecon keep setting it back to default_t?
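
    A minimal sketch of the recipe that usually clears up this "restorecon puts it back to default_t" symptom: register the rule with a regex that covers the directory and everything under it, apply it recursively, then compare what the policy expects with what the path currently has. (Whether httpd_sys_content_t, which is read-only to Apache, or httpd_sys_rw_content_t is the right type for a directory Apache must write to is a separate question.)

        # Register a rule for the directory and everything below it
        semanage fcontext -a -t httpd_sys_content_t "/modevasive(/.*)?"

        # Apply it recursively
        restorecon -Rv /modevasive

        # What the policy says the path should get vs. what it has now
        matchpathcon /modevasive
        ls -Zd /modevasive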

    Read the article

  • Remote Desktop failed logon event 4625 not logging correctly on 2008 Terminal Services server

    - by Zone12
    When I use the new remote desktop with ssl and try to log on with bad credentials it logs a 4625 event as expected. The problem is, it doesn't log the ip address, so I can't block malicious logons in our firewall. The event looks like this: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{00000000-0000-0000-0000-000000000000}" /> <EventID>4625</EventID> <Version>0</Version> <Level>0</Level> <Task>12544</Task> <Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords> <TimeCreated SystemTime="2012-04-13T06:52:36.499113600Z" /> <EventRecordID>467553</EventRecordID> <Correlation /> <Execution ProcessID="544" ThreadID="596" /> <Channel>Security</Channel> <Computer>ontheinternet</Computer> <Security /> </System> <EventData> <Data Name="SubjectUserSid">S-1-0-0</Data> <Data Name="SubjectUserName">-</Data> <Data Name="SubjectDomainName">-</Data> <Data Name="SubjectLogonId">0x0</Data> <Data Name="TargetUserSid">S-1-0-0</Data> <Data Name="TargetUserName">notauser</Data> <Data Name="TargetDomainName">MYSERVER-PC</Data> <Data Name="Status">0xc000006d</Data> <Data Name="FailureReason">%%2313</Data> <Data Name="SubStatus">0xc0000064</Data> <Data Name="LogonType">3</Data> <Data Name="LogonProcessName">NtLmSsp</Data> <Data Name="AuthenticationPackageName">NTLM</Data> <Data Name="WorkstationName">MYSERVER-PC</Data> <Data Name="TransmittedServices">-</Data> <Data Name="LmPackageName">-</Data> <Data Name="KeyLength">0</Data> <Data Name="ProcessId">0x0</Data> <Data Name="ProcessName">-</Data> <Data Name="IpAddress">-</Data> <Data Name="IpPort">-</Data> </EventData> </Event> It seems because the logon type is 3 and not 10 like the old rdp sessions, the ip address and other information is not stored. The machine I am trying to connect from is on the internet and not on the same network as the server. Does anyone know where this information is stored (and what other events are generated with a failed logon)? Any help will be much appreciated.

    Read the article

  • Windows 7 explorer always crashes, opens small "Personalized Settings" window

    - by Ian Sellar
    My Windows 7 desktop PC, built by me, started acting very weird in the last couple of days. I use it quite often, about half of the time through TeamViewer. Explorer would crash and restart randomly, almost always through TeamViewer. This made me suspect that TeamViewer was the problem but I have reproduced it with and without TeamViewer several times. The only way I can seem to get the problem not to occur is by booting into Safe Mode. I have used CCleaner and Malwarebytes to make sure it wasn't a registry error or malware causing the problem, and I have tried the fix in the seemly related issue here as well every other fix I have found online including removing security updates KB980408 and KB2926765 as well as using "sfc /scannow" and a bunch of other things I can't remember. More recently when I try to start explorer it is popping up a small window that says "Personalized Settings" on the top, but is completely empty and crashes instantly. The only way I can get it to disappear is to kill the explorer.exe process. I wish I could take a screenshot but I can't seem to open paint or even find the exe. I have tried restarting it, I have tried starting it while the personalized settings window was open. I have come up with two lists of processes the first is the list of active processes when I boot into safe mode and explorer seems to work fine. The second is the list of processes that I can narrow it down to in normal boot and still replicate the problem. There is one process that I can't seem to close. NisSrv.exe which is describes as "Microsoft Network Realtime Inspection Service". When I try to close the process NisSrv.exe it says "The operation could not be completed. Access is denied." When I try to close the related service it gives the same message. Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 2,660 K smss.exe 304 Services 0 1,196 K csrss.exe 408 Services 0 4,156 K wininit.exe 444 Services 0 4,608 K csrss.exe 452 Console 1 8,700 K services.exe 492 Services 0 7,700 K winlogon.exe 524 Console 1 5,756 K lsass.exe 536 Services 0 10,644 K lsm.exe 544 Services 0 4,316 K svchost.exe 652 Services 0 8,976 K MsMpEng.exe 804 Services 0 40,696 K explorer.exe 1332 Console 1 85,220 K ctfmon.exe 1376 Console 1 3,680 K dllhost.exe 1624 Console 1 8,656 K chrome.exe 1408 Console 1 98,504 K WmiPrvSE.exe 2352 Services 0 6,472 K chrome.exe 1744 Console 1 65,116 K taskmgr.exe 372 Console 1 14,948 K cmd.exe 2776 Console 1 2,960 K conhost.exe 1816 Console 1 3,580 K tasklist.exe 2308 Console 1 5,868 K And the list of processes I have narrowed it down to. 
Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 2,808 K smss.exe 316 Services 0 1,216 K csrss.exe 484 Services 0 4,532 K wininit.exe 596 Services 0 4,604 K csrss.exe 604 Console 1 23,676 K services.exe 652 Services 0 11,344 K lsass.exe 668 Services 0 12,692 K lsm.exe 676 Services 0 4,464 K MsMpEng.exe 972 Services 0 68,436 K winlogon.exe 168 Console 1 7,784 K svchost.exe 496 Services 0 19,140 K NisSrv.exe 3176 Services 0 808 K svchost.exe 1684 Services 0 11,260 K taskmgr.exe 4524 Console 1 20,696 K cmd.exe 4764 Console 1 7,224 K conhost.exe 4772 Console 1 6,916 K sublime_text.exe 2340 Console 1 45,012 K dllhost.exe 4476 Console 1 8,736 K tasklist.exe 3796 Console 1 5,768 K WmiPrvSE.exe 1768 Services 0 6,344 K Here is the event data xml from event viewer for the error I am getting. <EventData> <Data>explorer.exe</Data> <Data>6.1.7601.17567</Data> <Data>4d672ee4</Data> <Data>vrfcore.dll</Data> <Data>6.3.9600.16384</Data> <Data>5215f8f5</Data> <Data>80000003</Data> <Data>0000000000003a00</Data> <Data>12e4</Data> <Data>01cfb84fa70f89dc</Data> <Data>C:\Windows\system32\explorer.exe</Data> <Data>C:\Windows\SYSTEM32\vrfcore.dll</Data> <Data>e5957093-2442-11e4-9f8a-94de806ed9cb</Data> </EventData> I was looking through the eventvwr log again and I found this, possibly related <EventData> <Data>runonce.exe</Data> <Data>6.1.7601.17514</Data> <Data>4ce7a253</Data> <Data>MSVCR100.dll</Data> <Data>10.0.40219.325</Data> <Data>4df2bcac</Data> <Data>c0000005</Data> <Data>000000000003c145</Data> <Data>670</Data> <Data>01cfb8dabbd85942</Data> <Data>C:\Windows\system32\runonce.exe</Data> <Data>C:\Windows\system32\MSVCR100.dll</Data> <Data>fa6f82b9-24cd-11e4-80a8-94de806ed9cb</Data> </EventData> And the general error details Faulting application name: Explorer.EXE, version: 6.1.7601.17567, time stamp: 0x4d672ee4 Faulting module name: vrfcore.dll, version: 6.3.9600.16384, time stamp: 0x5215f8f5 Exception code: 0x80000003 Fault offset: 0x0000000000003a00 Faulting process id: 0xc38 Faulting application start time: 0x01cfb84e5e852c5f Faulting application path: C:\Windows\Explorer.EXE Faulting module path: C:\Windows\SYSTEM32\vrfcore.dll Report Id: 9dc19e6d-2441-11e4-9f8a-94de806ed9cb Another probably unrelated error that I seem to be getting pretty often. Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected. My explorer tab in Autoruns seen below along with the error when I try to uncheck something. I should add that I seem to be able to disable shell extensions with ShellExView but I still can't get explorer to start correctly. EXPLORER SHELL UPDATE - See screenshot below I can access the explorer right click menu through a file manager I downloaded called NexusFile, but still no luck starting explorer. Another round of errors that I am getting regarding Windows Search Service The search service has detected corrupted data files in the index {id=4700}. The service will attempt to automatically correct this problem by rebuilding the index. Details: The content index catalog is corrupt. 
(HRESULT : 0xc0041801) (0xc0041801) followed by The Windows Search Service is being stopped because there is a problem with the indexer: The catalog is corrupt. Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801 and The plug-in in <Search.JetPropStore> cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801) and The gatherer object cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801) and The Windows Search Service cannot load the property store information. Context: Windows Application, SystemIndex Catalog Details: The content index database is corrupt. (HRESULT : 0xc0041800) (0xc0041800) WER Log http://pastebin.com/WXKGDT4Q I'll add information as I remember it or people request it.

    Read the article

  • EFI Partition Types

    - by Will
    Another employee was working with a mounted VHD and was using diskpart on it, and I'm not sure whether he messed something up or not. I think it's fine, but I wanted to double check. We have a Windows 2008 x64 machine that boots with EFI because of the large disks in the system. What should the different labels be within Disk Manager for this system? Here is what is currently set: 200 MB Partition - EFI System Partition C Partition - Boot, Page File, Crash Dump There should not be an active partition under EFI, correct? Also, is there supposed to be a "System" label in addition to "EFI System Partition"?

    Read the article

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro using a direct Ethernet cable. Also, an external hard drive (laptop drive) is connected to the Raspberry Pi using the USB port. However, backups are pretty slow. Activity Monitor claims that the network is transferring a very steady 5 Mb/s, whereas my Time Capsule transfers up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi self-reports (top) that its CPU is only half-used, with about equal parts afpd, usb-storage and jbd2/sda1-8. Thus, the processing power of the Raspberry Pi does not seem to be the problem here. To me, this looks like there is some kind of bottleneck that maxes out at 5 Mb/s, potentially making my backups run at less than their potential speed. To the best of my knowledge, this might be the AFP daemon, the USB bus or the external hard drive. So, my question is: how can I identify the true culprit, and what can I do about it?
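
    One way to separate the suspects is to measure the disk and the network independently, as sketched below. This assumes iperf is installed on both machines and that the backup volume is mounted at /mnt/timemachine on the Pi - both are assumptions, so adjust the paths and addresses:

        # Raw write speed of the USB disk, bypassing the network (run on the Pi)
        dd if=/dev/zero of=/mnt/timemachine/ddtest bs=1M count=256 conv=fdatasync
        rm /mnt/timemachine/ddtest

        # Raw network throughput, bypassing the disk
        iperf -s                 # on the Pi
        iperf -c 192.168.2.2     # on the Mac, pointing at the Pi's address

    Whichever leg comes in around the observed 5 Mb/s is the likely bottleneck; if both are well above it, the AFP daemon itself becomes the prime suspect.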

    Read the article

  • Database types for customer analytics

    - by Drewdavid
    I am exploring a paid solution to start providing better embedded, dashboard-style analytics information to our website customers/account holders, but would like to also offer an in-house development option to our team. The more equipped I am with specifics (such as the subject of this question), the better the adoption rate from the team (or so I have found), regardless of the path we choose. Would anyone care to summarize a couple of options for a fast and scalable database type through which we would provide the following: • Daily pageviews of a user's account pages (users have between 1 and 1000 pages) • Some calculated/compounded metrics (such as conversion rate, i.e. the ratio of a certain page type viewed to contact-form thank-you pages) • We have about 1,500 members (will need room to grow); the number of concurrently logged-in users will, for the question's sake, be 50. I ask because our developer has balked at providing this level of "over time" granularity (i.e. daily) due to the amount of space it would take up in a MySQL database. To avoid a downvote I have asked specifically for more than one option, realizing that different people will have different solutions. I will make amendments to my question if so guided by answering parties. Thank you for sharing your valued answers :)

    Read the article

  • BIND9 server types

    - by aGr
    I was configuring DNS on my server using BIND9. Everything seems to work, but I have a question regarding my config file. I've ended up with this configuration in /etc/bind/named.conf.local: zone "example.com" { type master; file "/etc/bind/db.example.com"; allow-transfer { 192.168.1.1; }; }; zone "1.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; allow-transfer { 192.168.1.1; }; }; forwarders { 10.253.22.140; 10.253.22.141; }; I've read about the different types of DNS server, like primary master etc. The first two parts (the two zone blocks) correspond to a primary DNS server configuration: the first is for "classic" (forward) lookups, the second for reverse lookups. The last part (forwarders) is the caching-server configuration and contains the IPs of my ISP's DNS servers, so all names resolved through this server will be cached. Simple question: am I right? Does my description make sense? Or can one server only be either master or caching?

    Read the article

  • for ps aux what are Ss Sl Ssl process types UNIX

    - by JiminyCricket
    when doing a "ps aux" command I get some process listed as Ss, Ssl and Sl what do these mean? root 24653 0.0 0.0 2256 8 ? Ss Apr12 0:00 /bin/bash -c /usr/bin/python /var/python/report_watchman.py root 24654 0.0 0.0 74412 88 ? Sl Apr12 0:01 /usr/bin/python /var/python/report_watchman.py root 21976 0.0 0.0 2256 8 ? Ss Apr14 0:00 /bin/bash -c /usr/bin/python /var/python/report_watchman.py root 21977 0.0 0.0 73628 88 ? Sl Apr14 0:01 /usr/bin/python /var/python/report_watchman.py

    Read the article
