Search Results

Search found 276 results on 12 pages for 'jeffrey blake'.

Page 3/12 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine if the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process and can be used as a validation of process completion. To automate the monitoring of these log files, perform the following steps:
    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert. (A small parsing sketch follows the query below.)
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be a good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could be different based on Reporting Database versions): [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process, set it to failonerror="true", and from there send results to an external tool to monitor the jobs, send them to email, or send them to a database.
    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.
    Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records an entry for the ETL run along with a Start and Finish time. If the ETL process has failed, the Finish date will be null. This table can be queried at the time the ETL process is expected to be finished and, if the Finish date is null, an alert can be sent. These are just some options; there are additional ways this can be accomplished based around these two areas - log files or database.
    Here is an additional query to gather more information about your ETL run (connect as Staruser):
        SELECT SYSDATE,
               test_script,
               decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc+1))) PROCESSNAME,
               duration duration
          from (select (e.endtime - b.starttime) * 1440 duration,
                       to_char(b.starttime, 'hh24:mi:ss') starttime,
                       to_char(e.endtime, 'hh24:mi:ss') endtime,
                       b.PROCESSNAME,
                       instr(b.PROCESSNAME, ']') loc,
                       b.infotype test_script
                  from (select processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                          from etl_processinfo
                         where processid = (select max(PROCESSID) from etl_processinfo)
                           and infotype = 'BEGIN') b
                 inner join (select processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                               from etl_processinfo
                              where processid = (select max(PROCESSID) from etl_processinfo)
                                and infotype = 'END') e
                    on b.processid = e.processid
                   and b.PROCESSNAME = e.PROCESSNAME
                 order by b.starttime)
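
    The parsing sketch referenced in step 1 above might look like the following in C++. It is a minimal sketch, not part of the original article: the default log file name, the "Step 39/39" marker, and the "Finished" check are taken from the text above and should be adjusted to your version and logging level.

        #include <fstream>
        #include <iostream>
        #include <string>

        // Scan a staretl log for [ERROR] entries, the final step marker, and a
        // closing "Finished" line; exit non-zero so a scheduler or Ant script
        // (failonerror="true") can raise the alert.
        int main(int argc, char** argv) {
            const std::string path = (argc > 1) ? argv[1] : "staretlprocess.log";
            std::ifstream log(path);
            if (!log) { std::cerr << "cannot open " << path << '\n'; return 2; }

            bool sawError = false, sawFinalStep = false;
            std::string line, lastLine;
            while (std::getline(log, line)) {
                if (line.find("[ERROR]") != std::string::npos)    sawError = true;
                if (line.find("Step 39/39") != std::string::npos) sawFinalStep = true;
                if (!line.empty()) lastLine = line;
            }
            const bool finished = lastLine.find("Finished") != std::string::npos;

            if (sawError || !sawFinalStep || !finished) {
                std::cerr << "ETL run needs attention:"
                          << (sawError ? " [ERROR] found;" : "")
                          << (!sawFinalStep ? " final step entry missing;" : "")
                          << (!finished ? " no Finished line" : "") << '\n';
                return 1;
            }
            std::cout << "ETL run looks complete\n";
            return 0;
        }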

    Read the article

  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1, we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3D primitives (I can't find VAOs in that version; weren't they there already?)? Should I still use the matrix stack (why not?)? Should I still use glFrustum? Can I take advantage of shaders in GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming with them? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static VBO data + shader?)? Note: take your time to answer this question, I'll accept an answer tomorrow.
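
    As a point of reference only (not an answer from the thread), drawing from a VBO in OpenGL 2.1 without glBegin/glEnd might look like the sketch below. It assumes a context has already been created and the entry points are loaded (here via GLEW), and that a GLSL 1.20 program is bound with its "position" attribute bound to location 0 via glBindAttribLocation before linking. VAOs entered core OpenGL only in 3.0 (earlier only via extensions).

        #include <GL/glew.h>   // assumes GLEW (or similar) loads the GL 2.1 entry points

        // Upload one triangle's positions into a VBO (glGenBuffers exists since GL 1.5).
        static GLuint uploadTriangle() {
            static const GLfloat verts[] = { -0.5f, -0.5f, 0.0f,
                                              0.5f, -0.5f, 0.0f,
                                              0.0f,  0.5f, 0.0f };
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
            return vbo;
        }

        // Call each frame with the shader program already bound; attribute 0 = position.
        static void drawTriangle(GLuint vbo) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void*)0);
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glDisableVertexAttribArray(0);
        }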

    Read the article

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for example for the world basement and walls, we need to create the actual rendered walls (shown to the player) and then duplicate them, place the copies at the same coordinates as the rendered walls, and call them collision (by defining their material as collision). He then defines in the Object loader function that those objects with material == collision are collision planes and should not be rendered, but only used to check collision. Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision);:
    - Creating only the walls object (by loading the .obj file in plane_object)
    - Defining them also as collision planes whenever check_collision is set to true
    In this case we have lowered the complexity of his method and made it more flexible and faster to develop (faster because we don't always have to make a copy for each plane, and flexible because we don't hardcode the Object loader). The only case in which this method could not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false);
    - Creating only the walls object (by loading the .obj file in plane_object)
    - Defining them also as collision planes whenever check_collision is set to true
    - Adding the ability to actually show the object or hide it based on hide_object
    The final question is: am I right? What possible problems would be encountered with my solution versus his?
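
    A minimal sketch of the single-load approach proposed above, with hypothetical stand-in types (the tutorial's actual mesh and collision structures are not shown in the excerpt):

        #include <string>
        #include <vector>

        struct Mesh { /* vertex data loaded from the .obj file */ };
        struct CollisionPlane { /* plane(s) extracted from the same vertex data */ };

        static Mesh parseObj(const std::string& /*path*/) { return Mesh{}; }             // stub: parsing omitted
        static CollisionPlane planesFrom(const Mesh& /*m*/) { return CollisionPlane{}; } // stub

        struct World {
            std::vector<Mesh>           visible;    // rendered every frame
            std::vector<CollisionPlane> colliders;  // only used for collision tests

            // The signature from the question: one .obj, optionally a collider,
            // optionally hidden (for invisible walls).
            void loadObject(const std::string& plane_object,
                            bool check_collision = true,
                            bool hide_object = false) {
                Mesh m = parseObj(plane_object);
                if (!hide_object)    visible.push_back(m);
                if (check_collision) colliders.push_back(planesFrom(m));
            }
        };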

    Read the article

  • Removing mdadm array and converting to regular disks while preserving data

    - by Jeffrey Kevin Pry
    I have a 6 disk (2TB each) mdadm RAID 5 volume created in Ubuntu 12.04 Server. However, I'm moving to a different solution and want to "unraid" my disks but keep the data. Only 50% is in use. From what I can surmise, I basically have to do this repeatedly for each physical disk:
    - Fail the disk
    - Format the failed disk
    - Move a portion of the files to the new disk
    - Reshape the array
    - Shrink the logical volume md0
    This seems like a very time-consuming process. Is there an easier way to do this (automatically, perhaps) without buying new disks to temporarily hold the data? I am also aware that during this process my RAID volume will be degraded and vulnerable the entire time. I am not too concerned about this and will be using battery backup and moving the most important files off first. Thank you for your help!

    Read the article

  • Partitioning Strategies for P6 Reporting Database

    - by Jeffrey McDaniel
    Prior to P6 Reporting Database version 3.2 sp1, range partitioning was used. This was applied only to the history tables. The ranges were defined during installation, and additional ranges would need to be added once your data's date range entered the final defined range. As of P6 Reporting Database version 3.2 sp1, interval partitioning was implemented. Interval partitioning was applied to the existing history tables as well as the Slowly Changing Dimension tables. One of the major advantages of interval partitioning is that there is no more manual addition of ranges: partitions for the defined interval are created automatically when data inserted into the table exceeds the existing partitions. In 3.2 sp1 there are steps on how to update your partitioning. For all versions after 3.2 sp1, interval partitioning is the only partitioning option used. When upgrading it is important to be aware of these changes. Here is a link with more information on partitioning, the types and the advantages: http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm

    Read the article

  • Example of DOD design

    - by Jeffrey
    I can't seem to find a nice explanation of Data Oriented Design for a generic zombie game (it's just an example, a pretty common one). Could you make an example of Data Oriented Design for creating a generic zombie class? Is the following good?
    Zombie list class:
        class ZombieList {
            GLuint vbo;                        // generic zombie vertex model
            std::vector<color> colors;         // object default colors
            std::vector<texture> textures;     // object textures
            std::vector<vector3D> positions;   // object positions
        public:
            unsigned int create();             // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D p);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*);              // move towards player, attack if near
        };
    Example:
        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw();    // draw every zombie
        }
    Or would creating a generic World container be a good example? One that contains every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector), and contains:
        std::vector<zombie>
        std::vector<player>
        ...
        std::vector<mapchunk>
        ...
        std::vector<vbobufferid> player_run_animation;
        ...
    What's the proper way to organize a game with DOD?
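
    For comparison, this is one common data-oriented shape for the update step (not from the thread; the names are illustrative): components live in parallel arrays indexed by zombie id, and update() walks them linearly, which is the cache-friendly access pattern DOD is after.

        #include <vector>
        #include <cstddef>

        struct vec3 { float x, y, z; };

        struct ZombieSystem {
            std::vector<vec3>  positions;  // one entry per zombie, contiguous
            std::vector<float> speeds;

            void update(const vec3& player, float dt) {
                for (std::size_t i = 0; i < positions.size(); ++i) {
                    const vec3 d{player.x - positions[i].x,
                                 player.y - positions[i].y,
                                 player.z - positions[i].z};
                    // step towards the player; normalization omitted for brevity
                    positions[i].x += d.x * speeds[i] * dt;
                    positions[i].y += d.y * speeds[i] * dt;
                    positions[i].z += d.z * speeds[i] * dt;
                }
            }
        };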

    Read the article

  • Demo of Contract Lifecycle Management at OpenWorld 2012

    - by jeffrey.waterman
    Here is information for the demo station around CLM at OpenWorld 2012. Be sure to check the main OpenWorld page for updates.
    Demo Stations Located in Moscone West 72
    Oracle E-Business Suite Advanced Procurement
    - Purchasing and Services Procurement
    - iProcurement
    - Contract Lifecycle Management for Public Sector
    Booth W-122

    Read the article

  • Announcement: Federal Financial Briefing

    - by jeffrey.waterman
    Dear Oracle/PeopleSoft Federal Financial Management Customers: Oracle is pleased to announce that we will conduct the next Federal Financial Management Briefing on Tuesday, April 17th from 8:30 am until 2:00 pm at the Oracle Campus in Reston, Virginia. The Registration Link and Agenda can be found at the web site below: Federal Financial Briefing: Register Here Directions to Oracle Reston: From the Beltway take the Dulles Toll Road (Route 267 West). Do not get on the Dulles Access Road or you will not be able to exit until you get to the airport. Take the Reston Parkway Exit (Exit 12). At the end of the exit ramp, turn right onto Reston Parkway. Make your first right onto Sunset Hills Road. Take a Right turn onto Oracle Way and park in Visitor Parking. The receptionist will direct you to the CAB.

    Read the article

  • Plan for your OpenWorld experience

    - by jeffrey.waterman
    Here is a partial list of the events which will take place at Oracle OpenWorld 2012. Please take time out of your conference activities to get to these important, and informative, events.
    Attend the Sessions:
    - General Session: Public Sector - Wednesday, 3 October, 10:15 a.m. – 11:15 a.m., Westin San Francisco Market Street – Metropolitan III
    - Oracle Exadata, Oracle Exalogic, Oracle Exalytics, and Big Data Solutions in the Public Sector - Wednesday, 3 October, 11:45 a.m. – 12:45 p.m., Westin San Francisco Market Street – City Room
    - Best Practices in the Use of Middleware for Information Sharing Across Agencies - Wednesday, 3 October, 1:15 p.m. – 2:15 p.m., Westin San Francisco Market Street – City Room
    - Upgrading PeopleSoft Applications in the Public Sector - Wednesday, 3 October, 1:15 p.m. – 2:15 p.m., Westin San Francisco Market Street – Franciscan I
    - Shared Services in Public Sector Organizations - Wednesday, 3 October, 3:30 p.m. – 4:30 p.m., Westin San Francisco Market Street – City Room
    - Achieving Agility Through Closed-Loop Oracle Policy Automation - Wednesday, 3 October, 5:00 p.m. – 6:00 p.m., Westin San Francisco Market Street – Franciscan I
    - The Value of Oracle E-Business Suite in the Public Sector - Wednesday, 3 October, 5:00 p.m. – 6:00 p.m., Westin San Francisco Market Street – City Ballroom
    Public Sector Reception: Monday, 1 October, 6:30 p.m. – 9:30 p.m., Jillian’s, 101 Fourth Street

    Read the article

  • Use of list-unsubscribe to improve inbox delivery

    - by Jeffrey Simon
    To overcome email being classified as spam by Gmail, Google recommends a number of steps, which we have implemented (namely SPF, DKIM, and Precedence: bulk). One additional measure they recommend at https://support.google.com/mail/bin/answer.py?hl=en&answer=81126#authentication reads as follows: Because Gmail can help users automatically unsubscribe from your email, we strongly recommend the following: Provide a 'List-Unsubscribe' header which points to an email address where the user can unsubscribe easily from future mailings (Note: This is not a substitute method for unsubscribing). Documentation for List-Unsubscribe is found at http://www.list-unsubscribe.com/. From this documentation I expect a button to be provided by a supported mail client. I have tested the 'List-Unsubscribe' header and it does not appear to provide the button. I have tested in both Gmail and OS X Mail. I tested with an http address alone and with both an email address and an http address. The format of the header is as follows: List-Unsubscribe: <mailto:[email protected]>, <http://domain.com/member/unsubscribe/[email protected]?id=12345N> No button appears in any test. My questions: How widely is List-Unsubscribe supported? Should a button be appearing somewhere, or does something else have to be present? I have seen a comment that even if the button is not present, services like Gmail, Yahoo, and Hotmail/Windows Live would give higher regard to email having the header, so it might be worthwhile for this aspect alone. Please note that our standard email footer already contains instructions and a link to allow unsubscribing from our email. Finally, is it worthwhile to implement this header? (That is, are there any downsides?)

    Read the article

  • How does activity id affect calculations such as schedule % complete when using a baseline?

    - by Jeffrey McDaniel
    Fields such as schedule % complete, planned value cost, etc. that use a baseline to help determine their value depend on the activity IDs matching between the baseline project and the current project. If the activity ID is changed, the link is broken. In the P6 power client there is an internal GUID that allows you to change the activity ID in either the baseline or the current project and still have these values related. In the P6 Reporting Database, however, the activity ID is used as the joining characteristic that determines which activities in a baseline project match those in the current project.
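
    As an illustration only (not part of the original post), the join-by-ID behavior can be pictured with a small C++ sketch; the IDs and values are made up:

        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Matching baseline and current activities by activity id. Renaming an id on
        // either side means the lookup below finds nothing, so the baseline-driven
        // fields (schedule % complete, planned value cost, ...) lose their reference.
        struct Activity { std::string id; double plannedValue; };

        int main() {
            std::vector<Activity> baseline = {{"A1000", 100.0}, {"A1010", 250.0}};
            std::vector<Activity> current  = {{"A1000",  80.0}, {"A1010-NEW", 300.0}}; // id changed

            std::unordered_map<std::string, const Activity*> byId;
            for (const auto& b : baseline) byId[b.id] = &b;

            for (const auto& c : current) {
                auto it = byId.find(c.id);
                if (it != byId.end())
                    std::cout << c.id << ": baseline planned value " << it->second->plannedValue << '\n';
                else
                    std::cout << c.id << ": no baseline match, baseline-driven fields are blank\n";
            }
        }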

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source. Considerations:
    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.
    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:
    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and how to adjust partition ranges.)
    b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.
    c) Run starETL for each of the 3 data sources (with the data source #, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).
    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with a possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.
    3. The Number of Partitions and the Number of Months per Partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time-related data, and they are dataset specific for optimization. The number of months per partition is self explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting.
    For example, if you kept the default of 3 for the number of partitions and selected 2 months for each partition, you would end up with:
    - 1st partition, 2 months
    - 2nd partition, 2 months
    - 3rd partition, all the remaining records
    Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day such as calendar year, month, week, quarter, etc., to allow it to be used in the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is also based on the same set date range. The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the date range defined, which is a start date and a rolling interval plus a certain range, generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6, and this same date range is validated and used for the P6 Reporting Database.
    The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 will be the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run, the Reporting Database will pick up this change and also adjust the max date on W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage these dates in these tables will progress forward with the rolling intervals.
    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process will pull spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data after that 3 year date range will be bucketed into the last day in the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally this amount of granularity years into the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format on the activity or project. The data is always there; the level of granularity is the decision.
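
    A small, purely illustrative sketch of the rolling window rule described above (not from the article; it assumes a C++20 toolchain with calendar support):

        #include <chrono>
        #include <iostream>

        // The minimum date is back-filled to January 1 of the current year; the
        // maximum date is today's month/day pushed forward by the rolling interval.
        int main() {
            using namespace std::chrono;
            const year_month_day today{floor<days>(system_clock::now())};
            const years rolling{3};   // the "start date plus 3 years" interval from the article

            const year_month_day min_date{today.year(), January, day{1}};
            const year_month_day max_date{today.year() + rolling, today.month(), today.day()};

            std::cout << "min " << int(min_date.year()) << '-' << unsigned(min_date.month())
                      << '-' << unsigned(min_date.day()) << ", max " << int(max_date.year())
                      << '-' << unsigned(max_date.month()) << '-' << unsigned(max_date.day()) << '\n';
        }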

    Read the article

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning some OpenGL 3.2 ways of doing things and I think it's quite great; I'm actually understanding more of shaders and the non-fixed pipeline in 1 week than in those 2 years I tried to learn the OpenGL fixed pipeline functions. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO. But the fragment shader is run for each pixel (is that right?), which is a huge number compared to, let's say, the 3 vertices of a triangle. Now it seems that in the vertex shader the out variables (like colors and stuff) are passed 1 to 1 to the fragment shader. But let's say that I pass the position of the vertex from the vertex shader to the fragment shader. How is it all executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
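
    For reference only (not an answer from the thread), a hypothetical GLSL 1.50 pair (the version matching OpenGL 3.2 core) makes the mechanics concrete. The vertex shader runs once per vertex and writes an out variable; the rasterizer then produces, for every fragment, an in value that is the perspective-correct barycentric blend of the three vertices' outputs, so no single vertex A, B or C is "chosen". The variable names below are made up, and the shader sources are shown as C++ string constants; compiling and linking them requires an existing OpenGL 3.2 context, which is omitted here.

        // Vertex shader: executed once per vertex pulled from the VBO.
        static const char* kVertexShader = R"(
        #version 150
        in  vec3 position;        // one value per vertex
        out vec3 interpolatedPos; // written once per vertex

        void main() {
            interpolatedPos = position;
            gl_Position = vec4(position, 1.0);
        }
        )";

        // Fragment shader: executed once per covered pixel (fragment).
        static const char* kFragmentShader = R"(
        #version 150
        in  vec3 interpolatedPos; // NOT one vertex's value: for each pixel the rasterizer
                                  // blends A, B and C with that pixel's barycentric
                                  // weights (perspective-corrected)
        out vec4 fragColor;

        void main() {
            fragColor = vec4(abs(interpolatedPos), 1.0); // visualize the blend
        }
        )";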

    Read the article

  • my mouse blinks!

    - by Jeffrey
    Hello, I'm new to Linux systems and I have the same problem in every distro... the thing is that my mouse blinks every time I move it or do anything with it! I think it's an Xorg problem or a driver problem, and the only way it does not do that is when I start with the low-resolution error using a xorg.conf from another graphics chipset, and I think that isn't the best choice xDD. Please help me if you can; I have an ATI Rage 128. Thanks a lot for your help, people n.n

    Read the article

  • Nvidia Powermizer Performance Levels

    - by jeffrey
    Is there any way to configure Nvidia PowerMizer performance levels? My current setup has 3 power levels, with the lowest one being 50 MHz. The problem with this is that it lags Compiz when it goes to the lowest performance level 0. Minimizing, maximizing, dragging windows, etc. are all sluggish when it's at the lowest level. Once PowerMizer leaves level 0 everything is very smooth and runs great. Is there any way for me to remove level 0 and just run the two higher levels 1/2? I don't want to completely disable PowerMizer, but I can't stand the lagging once PowerMizer goes into performance level 0. Setting the option "prefer maximum performance" fixes the problem as it disables PowerMizer, but the GPU is overkill at stock speeds for most desktop use @ 850 MHz.
    intel i5 2500k
    asus gene-z z68
    evga 560ti fpb (driver 295.40)
    ubuntu 12.04 LTS x64

    Read the article

  • "Unable to associated Elastic IP with cluster" in Eclipse Plugin Tutorial

    - by Jeffrey Chee
    Hi all, I am currently trying to evaluate AWS for my company and was trying to follow the tutorials on the web: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2241 However, I get the below error during startup of the server instance:
        Unable to associated Elastic IP with cluster: Unable to detect that the Elastic IP was correctly associated.
        java.lang.Exception: Unable to detect that the Elastic IP was correctly associated
            at com.amazonaws.ec2.cluster.Cluster.associateElasticIp(Cluster.java:802)
            at com.amazonaws.ec2.cluster.Cluster.start(Cluster.java:311)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.launch(ElasticClusterBehavior.java:611)
            at com.amazonaws.eclipse.wtp.Ec2LaunchConfigurationDelegate.launch(Ec2LaunchConfigurationDelegate.java:47)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:696)
            at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3051)
            at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3001)
            at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:300)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    Then after a while, another error occurs:
        Unable to publish server configuration files: Unable to copy remote file after trying 4 times
        local file: 'XXXXXXXX/XXX.zip'
        Results from first attempt: Unexpected exception: java.net.ConnectException: Connection timed out: connect
        root cause: java.net.ConnectException: Connection timed out: connect
            at com.amazonaws.eclipse.ec2.RemoteCommandUtils.copyRemoteFile(RemoteCommandUtils.java:128)
            at com.amazonaws.eclipse.wtp.tomcat.Ec2TomcatServer.publishServerConfiguration(Ec2TomcatServer.java:172)
            at com.amazonaws.ec2.cluster.Cluster.publishServerConfiguration(Cluster.java:369)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.publishServer(ElasticClusterBehavior.java:538)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:866)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
            at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
            at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    Can anyone point me to what I'm doing wrong? I followed the tutorials and the video tutorials on YouTube exactly. Best Regards, ~Jeffrey

    Read the article

  • Canonical representation of a class object containing a list element in XML

    - by dendini
    I see that most implementations of JAX-RS represent a class object containing a list of elements as follows (assume a class House containing a list of People):
        <houses>
          <house>
            <person>
              <name>Adam</name>
            </person>
            <person>
              <name>Blake</name>
            </person>
          </house>
          <house>
          </house>
        </houses>
    The result above is obtained, for instance, from the Jersey 2 JAX-RS implementation. Notice that Jersey creates a wrapper element "houses" around each house; however, strangely, it doesn't create a wrapper element around each person! I don't feel this is a correct mapping of a list; in other words, I'd feel more comfortable with something like this:
        <houses>
          <house>
            <persons>
              <person>
                <name>Adam</name>
              </person>
              <person>
                <name>Blake</name>
              </person>
            </persons>
          </house>
          <house>
          </house>
        </houses>
    Is there any document explaining how an object should be correctly mapped, apart from opinions?

    Read the article

  • Ubuntu 13.10 No Sound

    - by spiersie
    I was running 13.04 since last Monday and just today I upgraded to 13.10; in both of these versions I have not managed to get my sound working. I have gone into alsamixer and disabled auto mute, and the volumes are up. However, if somebody thinks they can help me fix this I will gladly follow any steps. Please list specifically any terminal commands you need me to run to either show specs or solve the problem, as I am not fluent with Linux commands, this desktop being my first system to run Linux, starting last Monday.
    blake@Blake-Ubuntu-PC:~$ lspci -v | grep -A7 -i "audio"
    00:01.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Trinity HDMI Audio Controller
        Subsystem: ASUSTeK Computer Inc. Device 8526
        Flags: bus master, fast devsel, latency 0, IRQ 53
        Memory at fef44000 (32-bit, non-prefetchable) [size=16K]
        Capabilities:
        Kernel driver in use: snd_hda_intel
    00:10.0 USB controller: Advanced Micro Devices, Inc. [AMD] FCH USB XHCI Controller (rev 03) (prog-if 30 [XHCI])
    00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD] FCH Azalia Controller (rev 01)
        Subsystem: ASUSTeK Computer Inc. Device 8445
        Flags: bus master, slow devsel, latency 32, IRQ 16
        Memory at fef40000 (64-bit, non-prefetchable) [size=16K]
        Capabilities:
        Kernel driver in use: snd_hda_intel
    00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 11)

    Read the article

  • Mounting a share with spaces in FreeBSD fstab

    - by Blake
    I'm trying to mount an SMB network share in fstab on FreeBSD, which works fine for a share without spaces, but fails if a space is in the name. I have replaced the space with \040, which is what everything on Google has said, but that didn't help. The share name I'm trying to mount is "Data Backups". The share name as written in fstab that doesn't work: //USERNAME@COMPUTER/Data\040Backups Any suggestions?

    Read the article

  • Exim forwards not going out through TLS

    - by Blake
    I'm trying to get Exim to use STARTTLS to send emails that are just forwards. I have a server accepting email at example-accepting.com for users. So I want [email protected] to forward all email to [email protected]. If I do this from the command line on example-accepting.com... echo "test" | mail -s "ssl/tls test" [email protected] Success!! Sent via TLS. BUT, if I send an email to [email protected], the forward fails; it's NOT being sent via TLS. I've tried forwarding the email via both /etc/aliases and the user's .forward file. The email is indeed sent, but NOT via TLS. Why is it that when I run "mail" from the command line it works like it should, but a .forward is not using TLS? Thanks

    Read the article

  • Route LAN traffic through wireless MIFI

    - by Randall Blake
    I have a Windows 7 laptop accessing the internet through a Verizon wireless MIFI configured as 192.168.1.1. It supports only 5 wireless connections, so I don't want to use up connections unnecessarily. That laptop has an ethernet NIC which I have given a static IP of 192.168.0.5. Everything else on the 192.168.0.0 network acquires an address via DHCP from a DLink router whose address is 192.168.0.1. Also on that network are a printer, some network cameras, and a Linux PC. The Linux PC does not have a wireless card (and I don't want to buy one). The Linux PC is located at 192.168.0.122. I can ping the Linux PC from the Windows PC. But I cannot access the internet from the Linux PC. I can ping everything on the 192.168.0.0 network EXCEPT the ethernet card in the Windows PC. It seems as though my DLink router will not route requests to the 192.168.0.5 NIC on the Windows PC. My Windows PC has a default route pointing to the 192.168.1.0 network. It also has a route telling it to route all traffic destined for the 192.168.0.0 network through the 192.168.0.5 interface. I have tried adding a default route to the Linux PC with the "gateway" 192.168.0.5, but that does not work. I have also tried adding a default route to the Linux PC with the gateway 192.168.0.1 (the DLink router), but that will not give me internet access either (over the 192.168.1.0 network). I tried these two different routes at different times - I did not set them both at the same time. I suppose this is a simple problem to solve, but I cannot seem to solve it. How can I give internet access over the 192.168.1.1 MIFI to my Linux PC on 192.168.0.122? Thanks
    EDIT: Additional Info
        Internet
           |
        MIFI (192.168.1.1)
           |  (wireless)
        (192.168.1.3) Windows 7 PC (192.168.0.5, wired)
                                       |
                          DLink Router (192.168.0.1)
                             |-- linux pc (192.168.0.122) (wired)
                             |-- printer (192.168.0.100) (wireless)
                             |-- network cameras, etc (192.168.0.103) (wireless)
    Only the Windows PC is multi-homed, with a wireless NIC that connects to the MIFI wirelessly and an ethernet NIC with a wired connection to the DLink router. (The DLink permits both wired and wireless connections.) I don't want to use Windows internet connection sharing because I believe it will set up the ethernet NIC as a gateway on 192.168.0.1 and a DHCP server. I already have the DLink performing that role and I don't want to change that if I do not have to. (The DLink permits me to make DHCP reservations and I really like that feature. I don't want to lose it.)

    Read the article

  • SQL Server 2000 msdb database loading/suspect

    - by Blake Parcell
    My SQL Server recently suffered a RAID controller/hard drive crash. After getting my hard drive problem corrected I soon found that some of my databases were marked (suspect), namely msdb. I am not a DBA by any means; however, I am somewhat familiar with the daily SQL activities that happen on my server. So I restored from backup and tried to bring my msdb database online. It is now forever stuck in (Loading\Suspect) and I am unable to script backups for my important databases. I can recreate all of the backup plans etc. if I can somehow get a working msdb. Any help would be greatly appreciated. I am currently using: Microsoft SQL Server 2000, Version: 8.00.194

    Read the article

  • Active Directory Partition Error

    - by BLAKE
    Right now my Active Directory is failing a dcdiag test. I can find no info online about this error. When I run dcdiag /test:crossrefvalidation, I get the output:
        Doing primary tests
        Testing server: Default-First-Site-Name\ad01
        Running partition tests on : ForestDnsZones
          Starting test: CrossRefValidation
          ......................... ForestDnsZones passed test CrossRefValidation
        Running partition tests on : DomainDnsZones
          Starting test: CrossRefValidation
          ......................... DomainDnsZones passed test CrossRefValidation
        Running partition tests on : Schema
          Starting test: CrossRefValidation
          ......................... Schema passed test CrossRefValidation
        Running partition tests on : Configuration
          Starting test: CrossRefValidation
          ......................... Configuration passed test CrossRefValidation
        Running partition tests on : mydomain
          Starting test: CrossRefValidation
          ......................... mydomain passed test CrossRefValidation
        Running partition tests on : t
          Starting test: CrossRefValidation
          This cross-ref has a non-standard dNSRoot attribute.
          Cross-ref DN: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
          nCName attribute (Partition name): DC=t
          Bad dNSRoot attribute: dc01.mydomain.com
          Check with your network administrator to make sure this dNSRoot attribute is correct, and if not please change the attribute to the value below.
          dNSRoot should be: t
          It appears this partition (DC=t) failed to get completely created. This cross-ref (CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com) is dead and should be removed from the Active Directory.
          ......................... t failed test CrossRefValidation
    I used LDP from the Windows support tools. I searched for the dnsRoot attribute in "cn=partitions,cn=configuration,dc=mydomain,dc=com", with the filter "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))", and I got the result:
        ***Searching...
        ldap_search_s(ld, "cn=partitions,CN=Configuration,DC=mydomain,DC=com", 1, "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))", attrList, 0, &msg)
        Result <0>: (null)
        Matched DNs:
        Getting 3 entries:
        >> Dn: CN=65502be3-fc90-442a-83d8-4b3b91e82439,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
            1> dnsRoot: ForestDnsZones.mydomain.com;
        >> Dn: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
            1> dnsRoot: ad01.mydomain.com;
        >> Dn: CN=f0ef5771-6225-4984-acd9-c08f582eb4e2,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
            1> dnsRoot: DomainDnsZones.mydomain.com;
    It looks like the bad partition has the name of my first domain controller, 'ad01.mydomain.com'. I have googled for a while and have not been able to find any help or documentation about application partitions in Active Directory. Does anyone have any advice on how to clean up this partition (or what the partition is for)? Does anyone know the repercussions of deleting this partition?

    Read the article

  • How do you view all your monitoring software

    - by BLAKE
    In my office I have all our monitoring tools set up and working great, but I don't have an easy way to view them. I have a large TV in my office plugged into an old PC that always shows my Nagios status page. If I need to change anything on that computer I use Synergy to access it from my desktop. We are thinking about adding another TV and I want some suggestions on setting it all up. We are a 99.99% Windows shop. What do you use to run all the TVs in your helpdesk? Synergy works for me, but what if one of the other admins wants to change the screen? Is there any easy way that any of us (currently 4 people) can change the screen from our desktops? (Remote Desktop doesn't work because it locks the console, which is the output to the TVs.) Any advice would help. Thanks.

    Read the article
