Search Results


  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial: it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances. I know from experience that writing a lexer/parser by hand - unless the grammar is trivial - is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc, or a combinator library like Parsec. The former was good as well, but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes: the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you have never programmed in anything other than Java/C#, but then this would be true of anything not written in Java/C#. At some point, however, I was literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I had been taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to resurface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution:

    - you need a lot of ifs to handle special cases, and things quickly spiral out of control
    - lots of hardcoded array indexes make maintenance painful
    - it is extremely difficult to handle things like a function call as a method argument (e.g. add( (add a, b), c) - see the sketch at the end of this question)
    - it is very difficult to provide meaningful error messages in case of syntax errors (which are very likely to happen)

    I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper could understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project, after all. (I won't use this argument, as it will probably sound offensive, and starting a war is not going to help anybody.) What are your favorite arguments against parsing the Cthulhu way?*

    *of course, if you can convince me he's right, I'll be perfectly happy as well
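
    To make the nested-call bullet above concrete, here is a minimal sketch - in C++ rather than the Java/C# mentioned in the question, with an illustrative splitArgs helper - of what naive splitting does to that expression:

    // Minimal sketch: why comma-splitting cannot parse nested calls.
    // The input mirrors the DSL example add( (add a, b), c).
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // The "stack and String.Split" approach: split the argument list on commas.
    std::vector<std::string> splitArgs(const std::string& args) {
        std::vector<std::string> parts;
        std::stringstream ss(args);
        std::string part;
        while (std::getline(ss, part, ','))
            parts.push_back(part);
        return parts;
    }

    int main() {
        // Argument list of the outer call in add( (add a, b), c)
        const std::string args = "(add a, b), c";
        for (const auto& p : splitArgs(args))
            std::cout << "arg: [" << p << "]\n";
        // Prints "(add a", " b)" and " c": the naive split cannot see that
        // the first comma is nested inside parentheses. Handling that
        // correctly means tracking nesting depth, i.e. writing the very
        // tokenizer/parser the String.Split approach tries to avoid.
        return 0;
    }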

    Read the article

  • External hard drive not recognized

    - by sr71
    Installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine except when I plug in a 320 GB Toshiba external hard drive: it is not recognized. When I plug in an 8 GB flash drive it is recognized, no problem.

    What do I mean by "not recognized"? I mean: how do you know it was not recognized? You can check the command "cat /proc/partitions" in a terminal with and without the HDD attached. If you see some difference, it's OK. If the difference is something like "sda1" (sd + letter + number) then you have a partition on it; maybe it can't be handled in Ubuntu (no filesystem on it, or so). If there is only "sda" in the difference, for example (sd + letter, but no number after it), then the drive itself is detected, just no partition is created on it. Also you can check the messages of the kernel, with the "dmesg" command in the terminal. If there is no disk/partition/anything in /proc/partitions, there can be a USB-level problem; you can issue the command "lsusb" as was suggested before my answer. It really is a matter of what you mean by "not recognized".

    @Marco Ceppi @Oli @Pitto By "not recognized" I meant that when I plug in an 8 GB flash drive an icon immediately appears on the desktop indicating that there is a flash drive plugged in, and I can click on it and view the files on the flash drive. It also shows up in "Nautilus", the default file manager. When I plug in the 320 GB Toshiba external hard drive, I get no indication on the desktop or in the file manager (thus "not recognized"). When I run the command cat/proc/partitions/ I get an error message: Could not open location 'file:///home/bob/cat/proc/partitions' - with or without the external hard drive attached. With the dmesg command I get about 10 pages of info with no mention of the disk or any partition. When I run the lsusb command I get the following:

    bob@bob-desktop:~$ lsusb
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
    Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Enterprise Service Bus (ESB): Important architectural piece to a SOA or is it just vendor hype?

    Is an Enterprise Service Bus (ESB) an important architectural piece of a Service-Oriented Architecture (SOA), or is it just vendor hype in order to sell a particular product such as SOA-in-a-box? According to IBM.com, an ESB is a flexible connectivity infrastructure for integrating applications and services; it offers a flexible and manageable approach to service-oriented architecture implementation. With this being said, it is my personal belief that ESBs are an important architectural piece of any SOA. Additionally, generic design patterns have been created around the integration of web services into an ESB regardless of any vendor. ESB design patterns, according to Philip Hartman, can be classified into the following categories:

    - Interaction Patterns: enable service interaction points to send and/or receive messages from the bus
    - Mediation Patterns: enable the altering of message exchanges
    - Deployment Patterns: support solution deployment into a federated infrastructure

    Examples of Interaction Patterns:
    - One-Way Message
    - Synchronous Interaction
    - Asynchronous Interaction
    - Asynchronous Interaction with Timeout
    - Asynchronous Interaction with a Notification Timer
    - One Request, Multiple Responses
    - One Request, One of Two Possible Responses
    - One Request, a Mandatory Response, and an Optional Response
    - Partial Processing
    - Multiple Application Interactions

    Benefits of the Mediation Pattern (a minimal sketch of the mediator idea follows the references):
    - A mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently
    - Design an intermediary to decouple many peers
    - Promote the many-to-many relationships between interacting peers to "full object status"

    Examples of Deployment Patterns:
    - Global ESB: services share a single namespace and all service providers are visible to every service requester across an entire network
    - Directly Connected ESB: a global service registry enables independent ESB installations to be visible
    - Brokered ESB: bridges services that are reluctant to expose requesters or providers to ESBs in other domains
    - Federated ESB: service consumers and providers connect to the master or to a dependent ESB to access services throughout the network

    References:
    Mediator Design Pattern. (2011). Retrieved 2011, from SourceMaking.com: http://sourcemaking.com/design_patterns/mediator
    Hartman, P. (2006, 24 1). ESB Patterns that "Click". Retrieved 2011, from The Art and Science of Being an IT Architect: http://artsciita.blogspot.com/2006/01/esb-patterns-that-click.html
    IBM. (2011). WebSphere DataPower XC10 Appliance Version 2.0. Retrieved 2011, from IBM.com: http://publib.boulder.ibm.com/infocenter/wdpxc/v2r0/index.jsp?topic=%2Fcom.ibm.websphere.help.glossary.doc%2Ftopics%2Fglossary.html
    Oracle. (2005). 12 Interaction Patterns. Retrieved 2011, from Oracle® BPEL Process Manager Developer's Guide: http://docs.oracle.com/cd/B31017_01/integrate.1013/b28981/interact.htm#BABHHEHD
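
    As a minimal, vendor-neutral sketch of the mediation idea from the benefits list above (all class names are illustrative; an ESB plays the mediator role at much larger scale):

    // Mediator sketch: peers communicate only through the mediator,
    // never directly with each other. Names are illustrative only.
    #include <iostream>
    #include <string>
    #include <vector>

    class Peer;

    class Mediator {
    public:
        void attach(Peer* p) { peers_.push_back(p); }
        void broadcast(const Peer* from, const std::string& msg);
    private:
        std::vector<Peer*> peers_;
    };

    class Peer {
    public:
        Peer(std::string name, Mediator& m) : name_(std::move(name)), mediator_(m) {
            mediator_.attach(this);
        }
        void send(const std::string& msg) { mediator_.broadcast(this, msg); }
        void receive(const std::string& msg) const {
            std::cout << name_ << " received: " << msg << '\n';
        }
    private:
        std::string name_;
        Mediator& mediator_;
    };

    void Mediator::broadcast(const Peer* from, const std::string& msg) {
        for (Peer* p : peers_)
            if (p != from) p->receive(msg);  // peers never reference each other
    }

    int main() {
        Mediator bus;                     // plays the role of the ESB
        Peer a("ServiceA", bus), b("ServiceB", bus), c("ServiceC", bus);
        a.send("order.created");          // B and C receive it without knowing A
    }

    Each peer knows only the mediator, so adding or removing peers never touches the other peers - the same loose-coupling argument made for putting an ESB between services.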

    Read the article

  • What to do when an open source project starts to tear apart? (or: a manager tries to write code and then shouts at the team)

    - by Kabumbus
    Imagine there is an open source cross-platform project on Google Code. It has lots of revisions (1000). It concentrates lots of technological stuff in itself - rare stuff - it mixes top tech. It contains a server and more than one client. The project was created by a well-connected team of developers (friends) and a manager who sponsored the project at its start, during its first few months (the project is now more than a year old; sponsoring an OSS project is a big, good deal; he also gave the developers the idea for the project). The project grew in complexity and in the effort required to continue development. Once upon a time the manager - team leader - started trying to write code (he was a programmer on some other projects - not the best, but he felt like he was one). He started because one of the developers suggested an idea at the team meeting and he felt he just needed to do it on his own. He failed, and he told the dev team about it. The dev team did what he had failed to do in a few days. After that, the manager felt that the team codes perfectly without him and gets the job done in a short time. He felt sorry and lost, and he started to crash like an old bad PC. Firstly, he started to scream (in the form of messages, not voice): he tried to tell the developers that what they were doing was a bad, not-needed thing. The developers kindly told him that his "beginnings" were not compilable, while the dev team's product worked as needed. He told the developers that all the work they do should first be discussed with him. Here is the part where we need to mention that all team members are "project owners" and logically have equal rights. The team leader suggested these options to the developers: change their dev process to go through him, or be moved from project owners to contributors. So what are our options as developers? What arguments can we provide to the team leader/manager for him to calm down? Is it possible to save the project, or is it better to fork now? An important issue is that lately we have had no active ticket system, and I personally think that this was the reason the mess appeared. So... any ideas?

    Read the article

  • How do I stop icons appearing on the desktop under conky?

    - by Seamus
    When I download something to my desktop, or insert a CD or flash drive, the icon appears on my desktop. When I have conky running, the icon sometimes appears in the top right corner, underneath conky, where I can't see it. How do I stop this happening? My .conkyrc is pasted below. I didn't write it all myself, so I'm not entirely sure what I need to change, or what parts are relevant for this particular question...

    # UBUNTU-CONKY
    # A comprehensive conky script, configured for use on
    # Ubuntu / Debian Gnome, without the need for any external scripts.
    #
    # Based on conky-jc and the default .conkyrc.
    # INCLUDES:
    # - tail of /var/log/messages
    # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
    #
    # -- Pengo
    #

    # Create own window instead of using desktop (required in nautilus)
    own_window yes
    own_window_type override
    own_window_transparent yes
    own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

    # Use double buffering (reduces flicker, may not work for everyone)
    double_buffer yes

    # fiddle with window
    use_spacer right

    # Use Xft?
    use_xft yes
    xftfont DejaVu Sans:size=8
    xftalpha 0.8
    text_buffer_size 2048

    # Update interval in seconds
    update_interval 3.0

    # Minimum size of text area
    # minimum_size 250 5

    # Draw shades?
    draw_shades no

    # Text stuff
    draw_outline no # amplifies text if yes
    draw_borders no
    uppercase no # set to yes if you want all text to be in uppercase

    # Stippled borders?
    stippled_borders 3

    # border margins
    border_margin 9

    # border width
    border_width 10

    # Default colors and also border colors, grey90 == #e5e5e5
    default_color grey
    own_window_colour brown
    own_window_transparent yes

    # Text alignment, other possible values are commented
    #alignment top_left
    alignment top_right
    #alignment bottom_left
    #alignment bottom_right

    # Gap between borders of screen and text
    gap_x 10
    gap_y 20

    # stuff after 'TEXT' will be formatted on screen
    TEXT
    $color
    ${color orange}SYSTEM ${hr 2}$color
    $nodename $sysname $kernel on $machine

    ${color orange}CPU ${hr 2}$color
    ${freq}MHz Load: ${loadavg} Temp: ${acpitemp}
    $cpubar
    ${cpugraph 000000 ffffff}
    NAME ${goto 150}PID ${goto 200}CPU% ${goto 250}MEM%
    ${top name 1} ${goto 150}${top pid 1} ${goto 200}${top cpu 1} ${goto 250}${top mem 1}
    ${top name 2} ${goto 150}${top pid 2} ${goto 200}${top cpu 2} ${goto 250}${top mem 2}
    ${top name 3} ${goto 150}${top pid 3} ${goto 200}${top cpu 3} ${goto 250}${top mem 3}
    ${top name 4} ${goto 150}${top pid 4} ${goto 200}${top cpu 4} ${goto 250}${top mem 4}

    ${color orange}MEMORY / DISK ${hr 2}$color
    RAM: $memperc% ${membar 6}$color
    Swap: $swapperc% ${swapbar 6}$color
    Home: ${fs_free_perc /home}% ${fs_bar 6 /}$color
    Free Space: ${fs_free /home}

    ${color orange}NETWORK (${addr eth0}) ${hr 2}$color
    Down: $color${downspeed eth0} k/s ${alignr}Up: ${upspeed eth0} k/s
    ${downspeedgraph eth0 25,140 000000 ff0000} ${alignr}${upspeedgraph eth0 25,140 000000 00ff00}$color
    Total: ${totaldown eth0} ${alignr}Total: ${totalup eth0}
    ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

    ${color orange}WIRELESS (${addr wlan0}) ${hr 2}$color
    Down: $color${downspeed wlan0} k/s ${alignr}Up: ${upspeed wlan0} k/s
    ${downspeedgraph wlan0 25,140 000000 ff0000} ${alignr}${upspeedgraph wlan0 25,140 000000 00ff00}$color
    Total: ${totaldown wlan0} ${alignr}Total: ${totalup wlan0}
    ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

    Read the article

  • What's Bringing SharePoint 2007 Server to a halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed: SharePoint server memory usage is through the roof at times, and so is the CPU usage (most of the CPU usage is a SQL process), and I am running out of disk space all the time. I looked in the logs located in the 12 hive and, sure enough, I have 1 GB log files that are hard to open because of their size. The following error messages are flooding my SharePoint logs:

    04/05/2010 16:02:36.99 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    04/05/2010 15:59:51.51 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    04/05/2010 15:56:25.53 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    04/05/2010 15:54:14.73 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    I'm googling around but haven't found much. I know one other person posted something about this back in 2008, but no answers were reached. I have already checked whether any of the databases have gone offline for whatever reason, but from SQL everything is fine. I recently created a new SSP and deleted an old one, so I thought maybe that was causing some or all of the problems. I'm running the configuration wizard to see if anything changes. Please, if someone has had similar issues, let me know.

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.

    Unified XAML Product Strategy - Share Code, Get More Controls
    In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.

    Track Users' Behaviors
    New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.

    NetAdvantage for Windows Forms - New Office® 2010 Ribbon and Application Menu 2010
    Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.

    Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET
    Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.

    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience
    NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data.

    Mapping Made Easy
    10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com

    Read the article

  • What is a good strategy for binding view objects to model objects in C++?

    - by B.J.
    Imagine I have a rich data model that is represented by a hierarchy of objects. I also have a view hierarchy with views that can extract required data from model objects and display the data (and allow the user to manipulate it). Actually, there could be multiple view hierarchies that can represent and manipulate the model (e.g. an overview-detail view and a direct manipulation view). My current approach is for the controller layer to store a reference to the underlying model object in the view object. The view object can then get the current data from the model for display, and can send the model object messages to update the data. View objects are effectively observers of the model objects, and the model objects broadcast notifications when properties change. This approach allows all the views to update simultaneously when any view changes the model. Implemented carefully, this all works. However, it does require a lot of work to ensure that no view or model objects hold any stale references to model objects. The user can delete model objects or sub-hierarchies of the model at any time. Ensuring that no view objects hold references to model objects that have been deleted is time-consuming and difficult. It feels like the approach I have been taking is not especially clean; while I don't want to have explicit code in the controller layer for mediating the communication between the views and the model, it seems like there must be a better (implicit) approach for establishing bindings between the view and the model and between related model objects. In particular, I am looking for an approach (in C++) that understands two key points: there is a many-to-one relationship between view and model objects; and if the underlying model object is destroyed, all the dependent view objects must be cleaned up so that no stale references exist. While shared_ptr and weak_ptr can be used to manage the lifetimes of the underlying model objects and allow for weak references from the view to the model, they don't provide for notification of the destruction of the underlying object (they do in the sense that a stale weak_ptr allows for detection after the fact), but I need an approach that notifies the dependent objects that their weak reference is going away. Can anyone suggest a good strategy to manage this?
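
    One direction that matches both key points above is to make destruction itself an observable event: the model's destructor tells every registered view that the reference is going away. A minimal sketch (class and member names are illustrative, not from any particular framework):

    // Sketch: the model notifies its observers on destruction,
    // so no stale references can survive.
    #include <algorithm>
    #include <iostream>
    #include <vector>

    class Model;

    class Observer {
    public:
        virtual ~Observer() = default;
        virtual void onChanged(Model&) = 0;
        virtual void onDestroyed(Model&) = 0;  // the notification weak_ptr lacks
    };

    class Model {
    public:
        ~Model() {
            // Notify a copy, since observers may react while being notified.
            auto snapshot = observers_;
            for (Observer* o : snapshot) o->onDestroyed(*this);
        }
        void attach(Observer* o) { observers_.push_back(o); }
        void detach(Observer* o) {
            observers_.erase(std::remove(observers_.begin(), observers_.end(), o),
                             observers_.end());
        }
        void touch() { for (Observer* o : observers_) o->onChanged(*this); }
    private:
        std::vector<Observer*> observers_;
    };

    // A view holding one of the many-to-one references onto the model.
    class View : public Observer {
    public:
        explicit View(Model* m) : model_(m) { model_->attach(this); }
        ~View() { if (model_) model_->detach(this); }
        void onChanged(Model&) override { std::cout << "refresh display\n"; }
        void onDestroyed(Model&) override {
            model_ = nullptr;  // drop the reference before it goes stale
            std::cout << "model gone, view cleans itself up\n";
        }
    private:
        Model* model_;
    };

    int main() {
        auto* m = new Model;
        View v1(m), v2(m);  // many views, one model
        m->touch();
        delete m;           // both views are notified and null their pointers
    }

    The trade-off versus weak_ptr is that cleanup is eager and explicit: the view learns about the deletion at the moment it happens, rather than discovering a dead weak_ptr the next time it tries to lock it.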

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
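
    For illustration only - the shapes below are hypothetical C++ types, not anything from Udi Dahan's posts or NServiceBus - the granularity question is roughly the choice between these two styles of command:

    // Hedged sketch: coarse "Excel-like" update vs. intent-capturing commands.
    // All type and field names are hypothetical.
    #include <optional>
    #include <string>

    // Coarse-grained: one generic update that cannot express *why* data changed.
    struct UpdateCustomer {
        int customerId;
        std::optional<std::string> address;   // maybe they moved?
        std::optional<std::string> lastName;  // maybe they married?
        std::optional<bool> preferred;        // maybe a status change?
    };

    // Fine-grained: each command is a distinct unit of work capturing user intent.
    struct MakeCustomerPreferred { int customerId; };
    struct CustomerMoved         { int customerId; std::string newAddress; };
    struct CustomerGotMarried    { int customerId; std::string newLastName; };

    int main() {
        CustomerMoved cmd{42, "1 Main St"};  // one intent, one unit of work
        (void)cmd;
    }

    The quoted advice favors the second style because each command captures intent; the open question in the post is how far to push that when a user legitimately edits hundreds of fields at once.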

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background

    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5: in PS4 the configuration file was XML based, and in PS5 it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File

    The PS4 sample configuration file is XML based and contains information about the following items:
    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints
    A copy of this file can be downloaded from here.

    PS5 - Configuration File

    The corresponding sample configuration file for PS5 contains similar information about the sample scenario, but it is not in XML format; it has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration

    Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME. Also, as part of setting the environment, template jndi.properties and logging.properties files can be generated by using the following ant command:

    ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    The generated jndi.properties and logging.properties can be modified as needed: jndi.properties contains information about connectivity to the local WebLogic Managed Server instance, and logging.properties controls the amount of logging generated by the running simulator process.

    Simulator Usage - Start and Stop

    The command syntax to launch the simulator via ant is the same in PS4 and PS5; only the appropriate configuration file has to be supplied as the command-line argument, for example:

    ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator, which will keep running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:

    ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages (ID3DXFont), and now wanted to add some buttons to that. Before adding the buttons everything seemed to work fine, and I was using an ID3DXSprite for the text as well. Now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but since it is too big it just doesn't fit well. For example, the client area is 640x360 and the graphic is 440 wide, so I expect 100 pixels on the left and right; the left side is fine [I took a screenshot and "counted" the pixels in Photoshop], but on the right there are only about 20 pixels. My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

    // initially viewport was set to width/height of client area

    // clear device
    m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

    // begin scene
    m_d3dDevice->BeginScene();

    // render movie surface (just two triangles to which the movie is rendered)
    m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,false);
    m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
    m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
    m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); // ignored
    m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
    m_d3dDevice->SetTexture( 0, m_movieTexture );
    m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
    m_d3dDevice->SetFVF(Vertex::FVF_Flags);
    m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

    // render sprites
    m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

    // text drop shadow
    m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                      &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );

    // text
    m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                      &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

    // control object
    m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this

    m_sprite->End();

    // end scene
    m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" about 440 pixels wide) everything seems fine; the objects are positioned where I expect them, just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axis to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple - I just wanted to draw my 2D objects 1:1, the size they came from the file... I am really very inexperienced in D3D, so the answer might be very simple...

    Read the article


  • Rendering My Fault Message

    - by onefloridacoder
    My coworkers set up a nice way to get the fault messages from our service layer all the way back to the client's service proxy layer, and this is what I needed to work on this afternoon. Through a series of trials and errors I finally figured this out. The confusion was in how I was looking at the exception in the QuickWatch viewer. It appeared as though the EventArgs from the service call had somehow magically been cast back to FaultException, not FaultException<T>. I drilled into the EventArgs object with QuickWatch and copied out this expression to figure out where the fault message was hiding; when I pasted it into the IDE I got squigglies. Poop.

    ((System.ServiceModel.FaultException<UnhandledExceptionFault>)(((System.Exception)(e.Result)).InnerException)).Detail.FaultMessage

    I won't bore you with the details, but here's how it turned out. The EventArgs object, which I'm calling "e", is not the result you're expecting (such as some collection of items). It's actually a FaultException - or, in my case, FaultException<T>. Below is the calling code and the callback that handles the expected response or the fault from the completed event.

    public void BeginRetrieveItems(Action<ObservableCollection<Model.Widget>> FindItemsCompleteCallback, Model.WidgetLocation location)
    {
        var proxy = new MyServiceContractClient();

        proxy.RetrieveWidgetsCompleted += (s, e) => FindItemsCompleteCallback(FindItemsCompleted(e));

        // property name assumed; the original snippet's initializer
        // ("new RetrieveWidgetsRequest { location.Id }") doesn't compile
        RetrieveWidgetsRequest request = new RetrieveWidgetsRequest { LocationId = location.Id };

        proxy.RetrieveWidgetsAsync(request);
    }

    private ObservableCollection<Model.Widget> FindItemsCompleted(RetrieveWidgetsCompletedEventArgs e)
    {
        if (e.Error is FaultException<UnhandledExceptionFault>)
        {
            var fault = (FaultException<UnhandledExceptionFault>)e.Error;
            var faultDetailMessage = fault.Detail.FaultMessage;

            UIMessageControlDuJour.Show(faultDetailMessage);
            return new ObservableCollection<Model.Widget>();
        }

        var widgets = new ObservableCollection<Model.Widget>();

        if (e.Result.Widgets != null)
        {
            e.Result.Widgets.ToList().ForEach(w => widgets.Add(this.WidgetMapper.Map(w)));
        }

        return widgets;
    }

    Read the article

  • How to Add a Business Card Image to a Signature in Outlook 2013 Without the vCard (.vcf) File

    - by Lori Kaufman
    When you add a business card to a signature, an image of the business card is inserted into the signature and the vCard (.vcf) file is attached. If you don’t want to attach the vCard file, you can insert the image only into your signature. To insert only the image of your business card without the .vcf file, click People on the Navigation Bar at the bottom of the Outlook window. To get a business card image we can use, we must view the contacts in any form other than People, so we can open the full contact editing window. To do this, click on a different view in the Current View section of the Home tab. We chose to view our contacts in the Business Card format. Double-click on your contact in the current view. The full contact editing window displays with an image of the business card on the right. Right-click on the business card image and select Copy Image from the popup menu. To close the contact editing window, click the File tab and click Close in the menu list on the left. NOTE: You can also click the X in the upper, right corner of the contact editing window to close it. To open the signature editor, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. NOTE: You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. For more information, see our article about assigning a default signature. In the signature editor, right-click and select Paste from the popup menu. The image is inserted into the signature. You can also use this method to copy a business card image for use in other documents and programs. It’s also possible to insert the vCard (.vcf) file into a signature without the image. We’ll cover that topic tomorrow.     

    Read the article

  • Help identify the pattern for reacting to updates

    - by Mike
    There's an entity that gets updated from external sources. Update events come at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, it is the most current state of the entity that needs to be processed. There's a point of no return during processing, where the current state of the entity (and the state is consistent, i.e. no partial update is made) is saved somewhere else, and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one processing run before the point of no return, i.e. the entity state should not be processed more than once. So what I'm looking for is a pattern to cancel the current processing before the point of no return, or to abandon the processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database, with some files on disk, and the system is in .NET with web services and message queues. What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded; otherwise the processing data is persisted and it goes beyond the point of no return. Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and this approach also lacks elegance. Is there a name for this pattern and an existing solution?
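
    A minimal in-memory analogue of the queue-table check described above (hypothetical names; in the real system the counter would be a row version in the queue table, and the final check plus persist would run inside a single transaction):

    // Sketch: supersede-and-discard before the point of no return.
    #include <atomic>
    #include <cstdint>
    #include <iostream>

    struct Entity {
        std::atomic<std::uint64_t> latestUpdate{0}; // bumped on every arriving update
    };

    void onUpdateArrived(Entity& e) {
        e.latestUpdate.fetch_add(1);  // analogous to inserting a row in the queue table
        // ...then trigger a processing run
    }

    // Returns true if the snapshot was persisted, false if it was abandoned.
    bool process(Entity& e) {
        const std::uint64_t seen = e.latestUpdate.load(); // snapshot version at start

        // ...gather data and do the expensive work on the snapshot...

        // Point of no return: persist only if no newer update slipped in.
        // (In the database version, this check and the persist share one transaction.)
        if (e.latestUpdate.load() != seen) {
            std::cout << "newer update arrived; discarding this run\n";
            return false; // the run triggered by the newer update takes over
        }
        std::cout << "persisting snapshot " << seen << "\n";
        return true;
    }

    int main() {
        Entity e;
        onUpdateArrived(e);
        process(e); // persists, since no update raced in between
    }

    As far as a name goes, this is essentially optimistic concurrency control with a supersede-and-discard rule: always process the newest known state, and abandon a run if a newer state appears before the point of no return.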

    Read the article

  • Setting MTU on Exalogic

    - by csoto
    For many reasons, a system administrator may want to change the MTU settings of a server. But in a system like Exalogic, which contains lots of interconnected nodes and various other components, it's important to understand how this applies to the different networks. For example, when bringing up bonding of InfiniBand, an error like the following may be thrown:

    Bringing up interface bond1: SIOCSIFMTU: Invalid argument

    Both scripts ifcfg-ib0 and ifcfg-ib1 (from the /etc/sysconfig/network-scripts/ directory) have MTU set to 65500, which is a valid MTU value only if all IPoIB slaves operate in connected mode and are configured with the same value, so the line below must be added to both network scripts and the network then restarted:

    CONNECTED_MODE=yes

    By the way, an error of the form "SIOCSIFMTU: Invalid argument" indicates that the requested MTU was rejected by the kernel. Typically this would be due to it exceeding the maximum value supported by the interface hardware, in which case you must either reduce the MTU to a value that is supported or obtain more capable hardware. This problem has been seen when trying to modify the MTU using the ifconfig command, as in the example below:

    [root@elxxcnxx ~]# ifconfig ib1 mtu 65520
    SIOCSIFMTU: Invalid argument

    It's important to insist that in most cases the nodes must be rebooted after the MTU size has been changed. Although in some circumstances it may work without a reboot, that is not how it is typically documented.

    Now, in order to achieve reduced memory consumption and improved performance for network traffic received on IPoIB-related interfaces, it is recommended to reduce the MTU value in the interface configuration files for IPoIB-related bonds from 65520 to 64000. The change needs to be made to the interface configuration files under the /etc/sysconfig/network-scripts directory and applies to the interface configuration files for bonds over IPoIB-related slave devices, for example /etc/sysconfig/network-scripts/ifcfg-bond1. However, keep in mind that the numeric portion of the interface filenames corresponding to IPoIB interfaces is expected to vary across compute nodes and vServers, and so cannot be relied upon to identify which interface files are for bonds over IPoIB rather than EoIB-related slave interfaces. To fix these MTU values to the recommended settings, there are very useful instructions and a script in MOS Note 1624434.1, applicable to both physical and virtual configurations of Exalogic.

    Regarding the recommended MTU value for EoIB-related interfaces, its maximum appropriate value is 1500. If for some reason a vServer has been created with a higher value (set in the /etc/sysconfig/network-scripts/ifcfg-bond0 file), then it must be fixed. An error like the following could be thrown under this circumstance:

    [root@vServer ~]# service network restart
    ...
    Bringing up interface bond0:  SIOCSIFMTU: Invalid argument

    Also, an error like the one below can be seen in the /var/log/messages file of the vServer:

    kernel: T5074835532 [mlx4_vnic] eth1:vnic_change_mtu:360: failed: new_mtu 64000 2026

    MOS Note 1611657.1 is very useful for this purpose.

    Read the article

  • Documenting sp_ssiscatalog

    - by jamiet
    What is the best way to document an API? Moreover, what is the best way to document a T-SQL API? Before I try to answer those questions I should explain what I mean by "a T-SQL API". I think of an API as being a collection of well-defined, known code modules that provide some notion of a service to whoever uses it; in T-SQL terms, I tend to think of a collection of stored procedures and functions as a form of API. It's a loose definition, I admit, and in SQL Server circles we don't tend to think of stored procedures collectively as an API, but if you think about it that's exactly what they are. The question of how to document a T-SQL API came to my mind as I worked on sp_ssiscatalog. How could I make it easy for people to learn about the capabilities of sp_ssiscatalog without forcing them to dig through the code and find out for themselves? My opening gambit was to write documentation pages on the wiki at http://ssisreportingpack.codeplex.com. That's kinda useful, but it does suffer the disadvantage that someone using sp_ssiscatalog needs to go visit a webpage to read it - I want the documentation to be available wherever the user is using sp_ssiscatalog. Moreover, maintaining the wiki is a real PITA. Intellisense works up to a point, I guess, but that only shows whatever SQL Server knows about the various parameters, which isn't all that much! I wanted a better way for my API users to learn about its capabilities, so I hit upon the idea of simply using PRINT statements within the code itself to inform the user what options are available; hence I added such PRINT statements in the latest check-in. Now when you execute (for example):

    EXEC sp_ssiscatalog @operation_type='execs'

    you can hit F6 a few times to view the messages pane, where the PRINT output lists all the parameters that can be used to affect the results that just got returned. I really do think this will be very useful to anyone using sp_ssiscatalog; I myself am always forgetting what the parameters are, and I wrote the damn thing, so I can't really expect anyone else to remember them. I have not yet made available a release that has these changes in it, but when I do I'll blog about it right here. At the time of writing, the latest available release of sp_ssiscatalog is DB v1.0.1.0, but if you want the latest and greatest simply download it straight from source. Feedback is welcome as always. @Jamiet

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Moms on Mobile: Are They Way Ahead of You?

    - by Mike Stiles
    You may have no idea how much and how fast moms are embracing mobile. Of all the demographics that can be targeted by marketers, moms have always been at or near the top of the list. And why not? They’re running households, they’re all over town, they’re making buying decisions, and they’re influencing family and friends. They, out of necessity, become masters of efficiency and time management. So when a technology tool, like mobile, comes along that assists with that efficiency and time management, we would obviously expect them to take advantage of it. So if it’s obvious, why are so many big, sophisticated brands left choking on the dust of moms who have zoomed past them in the adoption of mobile, and social on mobile? Let’s break down some hard truths as presented by a Mojiava report:

    - Moms spend 6.1 hours per day on average on their smartphones – more than magazines, TV or radio.
    - 46% took action after seeing a mobile ad.
    - 51% self-identify as “addicted” to their smartphone.
    - Households with an income of $25K-$50K have about the same mobile penetration among moms as those with incomes of $50K-$75K. So mobile is regarded as a necessity for middle-class moms.
    - Even moms without smartphones spend 2.5 hours on average per day on some connected mobile device.
    - Of moms with such devices, 9.8% have an iPad, 9.5% a Kindle and 5.7% an iPod Touch.
    - Of tablet-owning moms, 97% bought something using their tablet in the last month.
    - 31% spend over 10 hours per week on their tablet, but less than 2 hours per week on their PCs.
    - 62% of connected moms use shopping apps.
    - 46% want to get info on their mobile while in a store.
    - Half of connected moms use social on their mobile. And they’re engaged. 81% are brand fans, 86% post updates, and 84% comment.

    If women and moms are one of your primary targets and you find yourself with no strong social channels where content is driving engagement and relationship-building, with sites not optimized for mobile, or with no tablet or smartphone apps, you have been solidly left behind by your customers and prospects. And their adoption of mobile and social on mobile is only exponentially speeding up, not slowing down. How much sense does it make when your customer is ready to act on your mobile ad, wants to use your iPad app to buy something from you, wants to be your fan on Facebook, wants to get messages and deals from you while they’re in your store…but you’re completely absent? I’ll help you cheat on the test by giving you the answer…no sense at all. Catch up to momma.

    Read the article

  • What are different ways to reduce latency between a server and a web application? [closed]

    - by jjoensuu
    This is a question about a web application that provides SOAP web services. For the sake of this question, this web app is hosted on SERVER B, which is located in California. We have an automated, scheduled process running on SERVER A, located in New York. This scheduled process is supposed to send SOAP messages to SERVER B every so often, but it typically dies soon after starting. We have now been told by the vendor that the reason the process dies is the latency between SERVER A and SERVER B. The data traffic is routed through many different public networks; there is no dedicated line between SERVER A and SERVER B. As a result, I have been asked to look into ways to reduce the latency between SERVER A and SERVER B. So I wanted to ask: what are the different ways to reduce latency in a situation like this? For example, would it help to switch from HTTPS to some other secure protocol? (The thought here is that perhaps some other alternative would require fewer handshakes than HTTPS.) Or would a VPN help? If a VPN would reduce the latency, how would it do that? NOTE: I am not looking for an explicit answer that would work in my specific situation. I am just looking for a simple list of technologies that could be used for this. I will still have to evaluate the technologies and discuss them internally with others, so the list would just be a starting point. Here I am assuming that there exist very few ways to reduce latency between two servers that communicate across public networks using HTTPS. Feel free to correct me if this assumption is wrong, and please ask if there is a need for specific information. NOTE 2: A list of technologies is a specific answer to the question I stated in the title. NOTE 3: It's rather dumb to close this question when it is, after all, about me looking for information, and furthermore this information can clearly be useful for others. Anyway, luckily there are other sites where I can ask around. StackExchange seems too attached to its own philosophical principles. Many thanks

    Read the article

  • How / Where can I host my Java web application? [closed]

    - by Huliax
    Possible Duplicate: How to find web hosting that meets my requirements? In case you want to skip to the crux of my inquiry, just read the bold type. I just finished my CS degree (at 39 years old :-)). For my final project I designed and built a system that can provide local positioning / location awareness to mobile wifi devices (I only have an Android client thus far). The server receives data from clients, processes it, and responds to the clients with messages containing information about their respective locations. I would like to continue the project (perhaps release it as open source, but that is a different discussion). Thus far my server application has been running on the CS department's hardware, where I could pretty much do whatever I wanted. I'm getting kicked off that system in a few weeks, so I have to find a new home for my server application. I need a host that will let me run my Java server (along with a MySQL db) -- preferably on the cheap, since I haven't yet got a job. I have very little experience with the "real world" of web development / hosting, and I'm having trouble figuring out what kind of hosting service will let me run my application as is. If that turns out to be a tall order, then I need to know what my options are for changing things so that I can get up and running with some hosting. As an aside, I'm also researching whether or not I should rewrite this in a different language, trying to figure out if there is a substantially better (for whatever reason) one for what I'm doing. This might also have a bearing on my hosting needs. One possibility is to write the server in something more widely accepted by hosting services. I have been searching for answers to my question and haven't found quite what I'm looking for. Part of the problem might be that I don't know exactly what terminology to use. If there is a good answer to this question elsewhere, please feel free to point me towards it. Thanks for help / advice.

    Read the article

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
    I'm working on a 5+ year old ASP.NET project that has 74+ projects, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine, that's up to someone else, so I've found a few tricks to help.
    1. Only build the Silverlight project you're working with. This builds all referenced projects (you can see these by right-clicking and choosing Project Dependencies) and packages a new XAP (you can see all the actions in your build output window). Then refresh the page with the Silverlight app and it's up to date.
    2. Attach the debugger to the running process instead of rebuilding everything. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the "Attach to:" row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first entry and push "i" on the keyboard, which jumps to the open IE windows. Find the one with type Silverlight, x86. It is usually directly above an entry of type x86 that shows the page title. Click Attach and watch your output window spit out messages about loading debug symbols and your breakpoints being enabled (if this doesn't happen, you chose the wrong process; hit Stop and try again). Now you can debug the client code as normal; server code requires a full F5 or attaching to the correct process.
    To improve this even further, bind the menu item to a keystroke. I chose Ctrl+X, X. (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut keys globally, and assign.) Most of the time I build the project, then hit Ctrl+X, X, then "i", then Enter, and I'm debugging. The process I want is usually the first IE entry in the list.

    Read the article

  • The Connected Company: WebCenter Portal Activity Streams

    - by Michael Snow
    Guest post by Mitchell Palski, Oracle Staff Sales Consultant. Social media is sure to have made its way into your company or government organization. Whether it's discussion threads, blog posts, Facebook-style profile pages, or just a simple instant-messenger application, in one way or another your employees are connected. What are the objectives of leveraging social media in your organization?
    - Facilitating knowledge transfer
    - More effectively organizing team events
    - Generating inter-community discussions to solve problems
    - Improving resource management
    - Increasing organizational awareness
    - Creating an environment of accountability
    Do any of the business objectives above stand out to you as needs? If so, consider leveraging the WebCenter Portal Activity Stream as part of your solution. In WebCenter Portal, the Activity Stream feature provides a streaming view of the activities of your connections, actions taken in portals, and business activities that looks a lot like a combined Facebook and Twitter newsfeed. Activity Stream can note when a user:
    - Posts feedback (comments)
    - Uploads a document
    - Creates a new blog, page, event, or announcement
    - Starts a new discussion
    - Streams messages and attachments entered through WebCenter Publisher (similar to Twitter)
    Through Activity Stream Preferences, you can select which of these activities to show or hide from your personal Activity Stream. Here's what you get:
    - A real-time stream of activities within a Portal or sub-Portal increases awareness across your organization or within a working group.
    - A complete list of user actions reduces the time-to-find for users who need to interact with the latest activities in your portal.
    - Users can publish to their groups when tasks are finished, for complete group traceability and accountability, as well as improved resource management.
    - Project discussions and shared documents that require the expertise of someone outside a working group now get increased visibility across your organization.
    There's a reason that commercial social media tools like Facebook and Twitter have been so successful: they spread information in an aesthetically appealing, easy-to-read format. Strategically placing an Activity Feed within your Portal is analogous to sending your employees a daily newsletter, events calendar, recent documents report, and list of announcements, BUT ALL IN ONE!

    Read the article

  • HOW TO: Change Internet Expenses Cost Center Prompt

    - by rveliche
    The cost center segment on the General Information page in Oracle Internet Expenses derives its label from the Prompt entered on the KFF setup. Changing this is not possible with simple personalization; the details below provide instructions to change the Prompt. Create a custom class, here called CustomHeaderKffCO.java, in the package oracle.apps.ap.oie.entry.header.webui (or any other). This class will have to extend oracle.apps.ap.oie.entry.header.webui.HeaderKffCO. Add the following logic to your custom class:

        package oracle.apps.ap.oie.entry.header.webui;

        import oracle.apps.fnd.framework.webui.OAPageContext;
        import oracle.apps.fnd.framework.webui.beans.OAWebBean;
        import oracle.apps.fnd.framework.webui.beans.message.OAMessageLayoutBean;
        import oracle.apps.fnd.framework.webui.OAControllerImpl;

        public class CustomHeaderKffCO extends HeaderKffCO
        {
          public void processRequest(OAPageContext pageContext, OAWebBean webBean)
          {
            // This must always be the first statement.
            super.processRequest(pageContext, webBean);

            // Find the message layout bean for the cost center segment.
            OAMessageLayoutBean layoutBean =
                (OAMessageLayoutBean) webBean.findChildRecursive("KffSEGMENT2MessageLayout");
            if (layoutBean != null)
            {
              // You should use messages/lookups to avoid translation issues.
              layoutBean.setLabel("Cost Center");
            }
          }
        }

    KffSEGMENT2MessageLayout is for illustration only; my Chart of Accounts has SEGMENT2 as the cost center segment. Please change this to the segment being used, e.g. for Segment6 it would be KffSEGMENT6MessageLayout. Note that super.processRequest(pageContext, webBean) is a must and should always be the first statement. Once the class is compiled, copy it to an appropriate directory; in my case I used $JAVA_TOP/oracle/apps/ap/oie/entry/header/webui. Navigate to the General Information page and click on "Personalize General Information Page". Click the Personalize icon next to Message Component Layout: (OIEGeneralInformationMsgCLayout). In the controller class section, update the new controller at the appropriate level. If the link "Personalize General Information Page" is not visible on your instance, check your personalization profiles.

    Read the article

  • How to Export a Contact to and Import a Contact from a vCard (.vcf) File in Outlook 2013

    - by Lori Kaufman
    vCard is the abbreviation for Virtual Business Card and is the standard format (.vcf files) for electronic business cards. vCards allow you to create and share contact information over the internet, such as in email messages and instant messaging. You can also use vCards to move contact information from one email or personal information management program to another, as long as both programs support the .vcf file format. vCards can contain name and address information, as well as phone numbers, email addresses, URLs, images, and audio clips. We will show you how to export a contact to, and import a contact from, a vCard (.vcf) file in Outlook.
    First, access the People section by clicking People at the bottom of the Outlook window. To view your contacts in business card format, click Business Card in the Current View section of the Home tab. Select a contact by clicking on the name bar at the top of the business card. To export the selected contact as a vCard, click the File tab. On the Account Information screen, click Save As in the list of options on the left. The Save As dialog box displays. By default, the name of the contact is used to name the .vcf file in the File name edit box. Change the name, if desired, select a location for the file, and click Save. The contact is saved as a .vcf file.
    To import a vCard, or .vcf file, into Outlook, simply double-click on the .vcf file. By default, .vcf files are automatically associated with Outlook, so the file is opened in Outlook as a Contact. Make any changes or additions to the contact in the contact editing window. To save the contact, click Save & Close in the Actions section of the Contact tab. NOTE: Because this contact is new, the full contact editing window displays rather than the Contact Card that displays when double-clicking on an existing contact. You can also open the full contact editing window instead of the Contact Card when editing or searching for a contact. The contact is added to the Contacts folder.
    You can add your contact information to a signature in business card format, and it will display in emails. We have covered how to create signatures and will be discussing more about signatures and business cards in Outlook.
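
    For reference, a .vcf file is just plain text, so a contact exported this way is easy to inspect or to generate programmatically. Below is a small sketch in Java 11+ that writes a minimal vCard 3.0 file of the kind Outlook can import; the name, phone number, and email are made-up placeholders.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class VCardSketch {
            public static void main(String[] args) throws IOException {
                // A minimal vCard 3.0: FN (formatted name) and N (structured name)
                // are the core fields; TEL and EMAIL are optional extras. vCard
                // lines are CRLF-terminated per the spec.
                String vcard = String.join("\r\n",
                        "BEGIN:VCARD",
                        "VERSION:3.0",
                        "FN:Jane Doe",
                        "N:Doe;Jane;;;",
                        "TEL;TYPE=WORK:+1-555-0100",
                        "EMAIL:jane.doe@example.com",
                        "END:VCARD",
                        "");

                // Double-clicking the resulting file opens it in Outlook as a new
                // contact, as described in the walkthrough above.
                Files.writeString(Path.of("jane-doe.vcf"), vcard);
                System.out.println("Wrote jane-doe.vcf");
            }
        }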

    Read the article
