Search Results

Search found 12333 results on 494 pages for 'memory leaks'.

  • Week 21: FY10 in the Rear View Mirror

    - by sandra.haan
    FY10 is coming to a close and before we dive into FY11 we thought we would take a walk down memory lane and reminisce on some of our favorite Oracle PartnerNetwork activities. June 2009 brought One Red Network to partners offering access to the same virtual kickoff environment used by Oracle employees. It was a new way to deliver valuable content to key stakeholders (and without the 100+ degree temperatures). Speaking of hot, Oracle also announced in June new licensing options for our ISV partners. This model enables an even broader community of ISVs to build, deploy and manage SaaS applications on the same platform. While some people took the summer off, the OPN Program team was working away to deliver a brand new partner program - Oracle PartnerNetwork Specialized - at Oracle OpenWorld in October. Specialized. Recognized. Preferred. If you haven't gotten the message yet, we may need an emergency crew to pull you out from that rock you've been hiding under. But seriously, the announcement at the OPN Forum drew a big crowd and our FY11 event is shaping up to be just as exciting. OPN Specialized was announced in October and opened our doors for enrollment in December 2009. To mark our grand opening we held our first-ever social webcast allowing partners from around the world to interact with us live throughout the day. We had a lot of great conversations and really enjoyed the chance to speak with so many of you. After a short holiday break we were back at it - just a small announcement - Oracle's acquisition of Sun. In case you missed it, here is a short field report from Ted Bereswill, SVP North America Alliances & Channels, on the partner events to support the announcement: And while we're announcing things - did we mention that both Ted Bereswill and Judson Althoff were named Channel Chiefs by CRN? Not only do we have a couple of Channel Chiefs, but Oracle also won the Partner Program 5 Star Programs Award and took top honors at the CRN Channel Champion Awards for Financial Factors/Financial Performance in the category of Data and Information Management, and at the Xchange Solution Provider event in March 2010. We actually caught up with Judson at this event for a quick recap of our participation: But awards aside, let's not forget our main focus in FY10 and that is Specialization. In April we announced that we had over 35 Specializations available for partners and a plan to deliver even more in FY11. We are just days away from the end of FY10 but hope you enjoyed our walk down memory lane. We are already planning lots of activity for our partners in FY11 starting with our Partner Kickoff event on June 29th. Join us to hear the vision and strategy for FY11 and interact with regional A&C leaders. We look forward to talking with you then. The OPN Communications Team

    Read the article

  • New Netra SPARC T3 Servers

    - by Ferhat Hatay
    Today at the Mobile World Congress 2011, Oracle announced two new carrier-grade NEBS Level 3- certified servers: Oracle’s Netra SPARC T3-1 rackmount server and Oracle’s Netra SPARC T3-1BA ATCA blade server bringing the performance, scalability and power efficiency of the newest SPARC T3 processor to the communications market.    The Netra SPARC T3-1 server enclosure has a compact 20inch-deep carrier-grade rack-optimized design The new Netra SPARC T3 servers further expand Oracle’s complete portfolio for the communications industry, which includes carrier-grade servers, storage and application software to run operations support systems and service delivery platforms with easy migration capabilities and unmatched investment protection via the binary compatibility guarantee of the Oracle Solaris operating system. With advanced reliability, networking and security features built-in to Oracle Solaris – the most widely deployed carrier-grade OS – the systems announced today are uniquely suited for mission-critical core network infrastructure and service delivery. The world’s first carrier-grade system using the 16-core, 128-thread SPARC T3 processor, the Netra SPARC T3-1 server supports 2x the I/O bandwidth, 2x the memory and is 35 percent faster than the previous generation. With integrated on-chip 10 Gigabit Ethernet, on-chip cryptographic acceleration, and built-in, no-cost Oracle VM Server for SPARC and Oracle Solaris Containers for virtualization, the Netra SPARC T3-1 server is an ideal platform for consolidation, offering 128 virtual systems in a single server. As the next generation Netra SPARC ATCA blade, Netra SPARC T3-1BA ATCA blade server brings the PICMG 3.0 compatibility, NEBS Level 3 Certification, ETSI compliance and the Netra business practices to the customer solution. The Netra SPARC T3-1BA ATCA blade server can be mixed in the Sun Netra CT900 blade chassis with other ATCA UltraSPARC and x86 blades.     The Netra SPARC T3-1BA ATCA blade server   The Netra SPARC T3-1BA ATCA blade server delivers industry-leading scalability, density and cost efficiency with up to 36 SPARC T3 processors (3456 processing threads) in a single rack – a 50 percent increase over the previous generation. The Netra SPARC T3-1BA blade server also offers high-bandwidth and high-capacity I/O, with greater memory capacity to tackle the increasing business demands of the communications industry. For service providers faced with the rapid growth of broadband networks and the dramatic surge in global smartphone adoption, the new Netra SPARC T3 systems deliver continuous availability with massive scalability, tested and certified to run in the harshest conditions. More information Oracle’s Sun Netra Servers Scaling Throughput and Managing TCO with Oracle’s Netra SPARC T3-1 Servers Enabling End-to-End 10 Gigabit Ethernet in Oracle's Sun Netra ATCA Product Family Data Sheet: Netra SPARC T3-1BA ATCA Blade Server Data Sheet: Netra SPARC T3-1 Server Oracle Solaris: The Carrier Grade Operating System

    Read the article

  • How Assassin’s Creed Should Have Ended [Video]

    - by Asian Angel
    Altair is on the run yet again from Italy’s finest and keeps managing to hide in plain sight. But will his luck hold out or will his final attempt to escape end in tragedy? How It Should Have Ended: Video…: Assassin’s Creed [via Dorkly Bits] How To Properly Scan a Photograph (And Get An Even Better Image) The HTG Guide to Hiding Your Data in a TrueCrypt Hidden Volume Make Your Own Windows 8 Start Button with Zero Memory Usage

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
    Your application continues to "see" one database. ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all. You still use your existing development tools, frameworks and libraries. Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi-AZ. And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn't offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive. And when you max out on the available hardware, you're stuck. Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by "DIY" sharding, and with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value-added services like read replicas, Multi-AZ, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you're curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
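
    To make the contrast with manual scale-out concrete, here is a minimal, hypothetical sketch of the kind of application-level shard routing a "DIY" approach would require (and which a transparent layer such as ScaleBase is meant to remove). The shard endpoints, the Orders table and the hash-based routing rule are illustrative assumptions, not something from the original article.

    // Hypothetical "DIY" shard routing across several MySQL RDS instances (illustrative only).
    using System;
    using MySql.Data.MySqlClient;

    public static class ShardRouter
    {
        // One RDS endpoint per shard; the endpoint names are made up for this sketch.
        private static readonly string[] ShardConnectionStrings =
        {
            "Server=shard0.example.rds.amazonaws.com;Database=app;Uid=app;Pwd=...;",
            "Server=shard1.example.rds.amazonaws.com;Database=app;Uid=app;Pwd=...;",
            "Server=shard2.example.rds.amazonaws.com;Database=app;Uid=app;Pwd=...;"
        };

        // Pick a shard by hashing the customer id (simple modulo placement).
        public static MySqlConnection GetConnectionForCustomer(long customerId)
        {
            int shard = (int)(Math.Abs(customerId) % ShardConnectionStrings.Length);
            return new MySqlConnection(ShardConnectionStrings[shard]);
        }

        // Every write must go to the shard that owns the row; cross-shard queries,
        // transactions and rebalancing are left to the application in a DIY setup.
        public static void InsertOrder(long customerId, decimal amount)
        {
            using (var conn = GetConnectionForCustomer(customerId))
            using (var cmd = new MySqlCommand(
                "INSERT INTO Orders (CustomerId, Amount) VALUES (@c, @a)", conn))
            {
                cmd.Parameters.AddWithValue("@c", customerId);
                cmd.Parameters.AddWithValue("@a", amount);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }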

    Read the article

  • Linux Live USB Media

    <b>Jamie's Random Musings:</b> "It is pretty common these days for laptops, and even desktops, to be able to boot from a USB flash memory drive. So you can save a little time and a little money by converting various Linux distributions ISO images to bootable USB devices, rather than burning them to CD/DVD."

    Read the article

  • The kernel column by Jon Masters #87

    <b>Linux User and Developer:</b> "The past month saw steady progress toward the final 2.6.34 kernel release, including the announcement of initial Release Candidate kernels 2.6.34-rc1 through 2.6.34-rc4. The latter had an interesting virtual memory bug that added a week of delay..."

    Read the article

  • Week in Geek: Internet Service Providers to Implement New Anti-Piracy Monitoring in July

    - by Asian Angel
    Our latest edition of WIG is filled with news link goodness such as Google’s plans for a Metro version of Chrome, Microsoft’s seeking of a patent for TV-viewing tolls, Encyclopaedia Britannica’s switch to a digital only format, and more. Screenshot by Asian Angel. Make Your Own Windows 8 Start Button with Zero Memory Usage Reader Request: How To Repair Blurry Photos HTG Explains: What Can You Find in an Email Header?

    Read the article

  • Storage Configuration

    - by jchang
    Storage performance is not an inherently complicated subject. The concepts are relatively simple. In fact, scaling storage performance is far easier compared with the difficulties encountered in scaling processor performance in NUMA systems. Storage performance is achieved by properly distributing IO over: 1) multiple independent PCI-E ports (system memory and IO bandwidth is key) 2) multiple RAID controllers or host bus adapters (HBAs) 3) multiple storage IO channels (SAS or FC, complete path) most importantly,...(read more)

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension &ndash; Part 1 &ndash; Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, balancing the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode. Dispatcher Mode The dispatcher mode introduces a role that is responsible for finding the best service instance to process each request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transport. For example, if we are using HTTP, the clients might all connect to the same service URL. On the server side there is a dispatcher listening on this URL and trying to retrieve all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding the instance: Round-robin: the dispatcher always sends the message to the next instance. For example, if the dispatcher sent the last message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment. Random: the dispatcher picks a service instance at random and, as with round-robin, does so regardless of whether the instance is busy. Sticky: the dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based. As you can see, none of these approaches is really load balanced. Clients send messages at any time, and each message might take a different amount of processing time on the server side. This means that in some cases some service instances are very busy while others are almost idle. For example, with round-robin mode it can happen that most of the simple task messages are passed to instance 1 while the complex ones are sent to instance 3, even though instance 1 is idle. This causes several problems in our architecture. First, the response to the clients might take longer than it should. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the round-robin mode. Second, if many requests come from the clients in a very short period, service instances might fill up with pending tasks and some instances might crash. Third, if we are using a cloud platform such as Windows Azure to host our service instances, the computing resource is billed by deployment period rather than actual CPU usage, so any idle service instance is wasting our money! Finally, the dispatcher becomes the bottleneck of our system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity we have to scale it up, or buy a hardware load balancer, which is very expensive, in addition to scaling out the service instances. Pulling Mode Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and, whenever they are idle, try to retrieve the next proper message to process.
    Since there is no dispatcher in pulling mode, it requires certain features from the transport. The transport must support multiple clients connecting and multiple servers listening. HTTP and TCP don't allow multiple clients to listen on the same address and port, so they cannot be used for pulling mode directly. All messages in the transport must be FIFO, which means the older message must be received before the newer one. Message selection would be a plus: both service and client can then specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles. A message bus, or a message queue, would be the best candidate transport for the pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary that can store the reply messages, for example Redis. The principle of pulling mode is to let the service instances be self-managed. Each instance tries to retrieve the next pending incoming message whenever it has finished its current task. This gives us more benefits and solves the problems we met in the dispatcher mode. The incoming message is received by the best available instance, which means the load is well balanced. It will no longer happen that some instances are busy while others are idle, since an idle instance will retrieve more tasks to keep itself busy. Because all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost-effective. Since there is no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something so the dispatcher knows about the new instances; in pulling mode, since all service instances are self-managed, no extra change is needed at all. If there are many incoming messages, the message bus can queue them in the transport, so the service instances will not crash. These are the benefits of using the pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: since the service instances are self-managed, we cannot know in advance which instance will process a message, so we need more information to support debugging and tracing. Real-time response may not be supported: each service instance only processes the next message after the current one is done, so if we have real-time requirements this may not be a good solution. Comparing the pros and cons above, the pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management.   WCF and WCF Transport Extensibility Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related area of WCF, but will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent to the underlying transport; on the other side the message is received from the transport and passed back through the same channels until it reaches the business logic. This model is called the "Channel Stack" in WCF, and the last channel in the channel stack must always be a transport channel, which is responsible for sending and receiving the messages. Since we are going to implement WCF over a message bus and build the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we dive into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs: datagram, request-reply and duplex. Datagram: also known as one-way or fire-and-forget mode. The message is sent from the client to the service and no reply from the service is needed; the client doesn't care about the result at all. Request-Reply: a very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service. Duplex: the client sends a message to the service, and while the service is processing the message it can call back to the client. During the callback the service acts like a client and the client acts like a service. In WCF, each MEP has a set of associated channel interfaces: datagram uses IInputChannel and IOutputChannel, request-reply uses IRequestChannel and IReplyChannel, and duplex uses IDuplexChannel. The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or assembled from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigating the WCF transport architecture, channel model and MEPs, we can finally identify what we need to build for our message bus based transport layer: Binding: (optional) defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. We can use the built-in CustomBinding as well. TransportBindingElement: defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint when using this transport. ChannelListener: creates the server-side channel based on its MEP. We can have one ChannelListener that creates channels for all supported MEPs, or a ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: creates the client-side channel based on its MEP. We can have one ChannelFactory that creates channels for all supported MEPs, or a ChannelFactory for each MEP. In this series I will use the second approach.
    Channels: based on the MEPs we want to support, we need to implement the corresponding channels. For example, if we want our transport to support the Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one. Scaffolding: in order to make our transport extension work we also need some scaffolding code. For example, we need some classes to send and receive messages through our message bus, and some code to read and write WCF messages, etc. These are not strictly necessary but will be very useful in our example.   Message Bus There is only one thing remaining before we can begin to implement our scale-out-capable WCF transport: the message bus. As I mentioned above, the message bus must have certain features in order to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product, and as I said before we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface that separates the message bus from WCF. This allows us to implement the bus operations with whatever kind of bus we are going to use. The interface looks like this:

    public interface IBus : IDisposable
    {
        string SendRequest(string message, bool fromClient, string from, string to = null);

        void SendReply(string message, bool fromClient, string replyTo);

        BusMessage Receive(bool fromClient, string replyTo);
    }

    There are only three methods on the bus interface; let me explain them one by one. The SendRequest method is responsible for sending a request message into the bus. Its parameters are: message: the WCF message content. fromClient: indicates whether this message came from the client. from: the channel ID this message was sent from. The channel ID will be generated when a channel of any kind is created, which will be explained in the following articles. to: the channel ID that should receive this message. In the Request-Reply and Duplex MEPs this is necessary, since the reply message must be received by the channel that sent the related request message. The SendReply method is responsible for sending a reply message. It is very similar to the previous one but has no "from" parameter, because there is no need to reply to a reply message in any MEP. The Receive method is responsible for waiting for an incoming message, which can be a request message or a specific reply message. It returns a BusMessage object, which contains some information about the channel. The code of the BusMessage class is:

    public class BusMessage
    {
        public string MessageID { get; private set; }
        public string From { get; private set; }
        public string ReplyTo { get; private set; }
        public string Content { get; private set; }

        public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
        {
            MessageID = messageId;
            From = fromChannelId;
            ReplyTo = replyToChannelId;
            Content = content;
        }
    }

    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes, and it can only be used if the service and client are in the same process. Very straightforward.
    public class InProcMessageBus : IBus
    {
        private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
        private readonly object _lock;

        public InProcMessageBus()
        {
            _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
            _lock = new object();
        }

        public string SendRequest(string message, bool fromClient, string from, string to = null)
        {
            var entity = new InProcMessageEntity(message, fromClient, from, to);
            _queue.TryAdd(entity.ID, entity);
            return entity.ID.ToString();
        }

        public void SendReply(string message, bool fromClient, string replyTo)
        {
            var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
            _queue.TryAdd(entity.ID, entity);
        }

        public BusMessage Receive(bool fromClient, string replyTo)
        {
            InProcMessageEntity e = null;
            while (true)
            {
                lock (_lock)
                {
                    var entity = _queue
                        .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                        .FirstOrDefault();
                    if (entity.Key != Guid.Empty && entity.Value != null)
                    {
                        _queue.TryRemove(entity.Key, out e);
                    }
                }
                if (e == null)
                {
                    Thread.Sleep(100);
                }
                else
                {
                    return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                }
            }
        }

        public void Dispose()
        {
        }
    }

    The InProcMessageBus stores the messages as InProcMessageEntity objects, which can carry some extra information besides the WCF message itself.

    public class InProcMessageEntity
    {
        public Guid ID { get; set; }
        public string Content { get; set; }
        public bool FromClient { get; set; }
        public string From { get; set; }
        public string To { get; set; }

        public InProcMessageEntity()
            : this(string.Empty, false, string.Empty, string.Empty)
        {
        }

        public InProcMessageEntity(string content, bool fromClient, string from, string to)
        {
            ID = Guid.NewGuid();
            Content = content;
            FromClient = fromClient;
            From = from;
            To = to;
        }
    }

    Summary OK, now we have all the necessary pieces ready; the next step is to implement our WCF message bus transport extension. In this post I described two scaling-out approaches for the service side, especially relevant when using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel model and the transport extension points, and identified what we need to do to create our own WCF transport extension so that our WCF services can use pulling mode on top of a message bus. Finally I provided some classes that will be used in the future posts and that work against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
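
    As a quick usage illustration (not part of the original series), here is a hypothetical sketch showing the in-process bus exchanging a request and a reply outside of WCF: one task plays the service and pulls requests from the bus, while the main thread plays the client. The channel ID and message strings are made up for this example; the real series wires the bus into WCF channels instead.

    // Hypothetical usage sketch of the InProcMessageBus defined above.
    using System;
    using System.Threading.Tasks;

    public static class InProcBusDemo
    {
        public static void Main()
        {
            IBus bus = new InProcMessageBus();
            const string clientChannelId = "client-1"; // made-up channel ID for the sketch

            // "Service": pull one request that came from a client, then reply to its channel.
            var service = Task.Run(() =>
            {
                BusMessage request = bus.Receive(fromClient: true, replyTo: null);
                Console.WriteLine("Service received: " + request.Content);
                bus.SendReply("pong", fromClient: false, replyTo: request.From);
            });

            // "Client": send a request tagged with its own channel ID, then wait for the reply.
            bus.SendRequest("ping", fromClient: true, from: clientChannelId);
            BusMessage reply = bus.Receive(fromClient: false, replyTo: clientChannelId);
            Console.WriteLine("Client received: " + reply.Content);

            service.Wait();
            bus.Dispose();
        }
    }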

    Read the article

  • Running Windows Phone Developers Tools CTP under VMWare Player - Yes you can! - But do you want to?

    - by Liam Westley
    This blog is the result of a quick investigation of running the Windows Phone Developer Tools CTP under VMWare Player.  In the release notes for Windows Phone Developer Tools CTP it mentions that it is not supported under VirtualPC or Hyper-V.  Some developers have policies where ‘no non-production code’ can be installed on their development workstation and so the only way they can use a CTP like this is in a virtual machine. The dilemma here is that the emulator for Windows Phone itself is a virtual machine and running a virtual machine within another virtual machine is normally frowned upon.  Even worse, previous Windows Mobile emulators detected they were in a virtual machine and refused to run.  Why VMWare? I selected VMWare as a possible solution as it is possible to run VMWare ESXi under VMWare Workstation by manually setting configuration options in the VMX configuration file so that it does not detect the presence of a virtual environment. I actually found that I could use VMWare Player (the free version, that can now create VM images) and that there was no need for any editing of the configuration file (I tried various switches, none of which made any difference to performance). So you can run the CTP under VMWare Player, that’s the good news. The bad news is that it is incredibly slow, bordering on unusable.  However, if it’s the only way you can use the CTP, at least this is an option. VMWare Player configuration I used the latest VMWare Player, 3.0, running under Windows x64 on my HP 6910p laptop with an Intel T7500 Dual Core CPU running at 2.2GHz, 4Gb of memory and using a separate drive for the virtual machines. I created a machine in VMWare Player with a single CPU, 1536 Mb memory and installed Windows 7 x64 from an ISO image.  I then performed a Windows Update, installed VMWare Tools, and finally the Windows Phone Developer Tools CTP After a few warnings about performance, I configured Windows 7 to run with Windows 7 Basic theme rather than use Aero (which is available under VMWare Player as it has a WDDM driver). Timings As a test I first launched Microsoft Visual Studio 2010 Express for Windows Phone, and created a default Windows Phone Application project.  I then clicked the run button, which starts the emulator and then loads the default application onto the emulator. For the second test I left the emulator running, stopped the default application, added a single button to change the page title and redeployed to the already running emulator by clicking the run button.   Test 1 (1st run) Test 2 (emulator already running)   VMWare Player 10 minutes  1 minute   Windows x64 native 1 minute  < 10 seconds   Conclusion You can run the Windows Phone Developer Tools CTP under VMWare Player, but it’s really, really slow and you would have to have very good reasons to try this approach. If you need to keep a development system free of non production code, and the two systems aren’t required to run simultaneously, then I’d consider a boot from VHD option.  Then you can completely isolate the Windows Phone Developer Tools CTP and development environment into a single VHD separate from your main development system.

    Read the article

  • Different types of Session state management options available with ASP.NET

    - by Aamir Hasan
    ASP.NET provides In-Process and Out-of-Process session state management. In-Process stores the session in memory on the web server. This requires a "sticky server" (or no load balancing) so that the user is always reconnected to the same web server. Out-of-Process session state management stores data in an external data source. The external data source may be either a SQL Server or a State Server service. Out-of-Process state management requires that all objects stored in session are serializable. Link: http://msdn.microsoft.com/en-us/library/ms178586%28VS.80%29.aspx
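
    As a rough illustration of the serializability requirement mentioned above, the hypothetical sketch below stores a [Serializable] object in session; the comment shows one common way the web.config is switched to the out-of-process StateServer mode. The CartItem type and the connection values are assumptions for the example, not from the original post.

    // Illustrative only: out-of-process session state requires serializable objects.
    //
    // A typical web.config switch to the ASP.NET State Server looks roughly like:
    //   <sessionState mode="StateServer" stateConnectionString="tcpip=localhost:42424" />
    // (or mode="SQLServer" with a sqlConnectionString for SQL Server-backed sessions).
    using System;

    [Serializable] // required once session state leaves the worker process memory
    public class CartItem // hypothetical type used only for this example
    {
        public int ProductId { get; set; }
        public int Quantity { get; set; }
    }

    public class CartPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Works with any object in InProc mode; in StateServer/SQLServer mode
            // the stored object must be serializable, as CartItem is here.
            Session["LastItem"] = new CartItem { ProductId = 42, Quantity = 1 };

            var item = (CartItem)Session["LastItem"];
            Response.Write(item.Quantity);
        }
    }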

    Read the article

  • New Themes New Benefits (WinForms)

    We believe that working hard on something can be great fun at the end when everything is done and the seeds have resulted in the sweetest fruits. This is the case with the new Theming Mechanism and the new Visual Style Builder which we introduced as of Q1 2010. I am not going to dive into any details on the new concepts behind all this stuff, but will simply focus on the numbers, both in terms of loading speed and memory usage. As you may already know, the new approach we use to style our controls uses the so-called Style Repository, which stores style settings that can be reused throughout the whole theme. As a result, the size of our themes has been significantly reduced. For instance, the size of all XML files of the old Desert theme sums up to 1.83 MB. The case with the new version of the Desert theme is drastically different: despite the fact that the new theme consists of more XML files compared to the old, its size is only 707 KB! Furthermore, we have performed a simple performance test, since common sense tells us that such a great improvement in memory footprint should be followed by a great improvement in speed. We have estimated that loading and applying the new Desert theme to a form containing all RadControls for WinForms takes roughly 30% less time compared to the same operation with the old version of the Desert theme. The following screenshots briefly demonstrate the scenario which we used to estimate the loading time difference between the old and the new Desert theme: Here, the old Desert theme is applied to all controls on the Form, which takes almost 1.3 seconds. Applying the new Desert theme (based on the new Theming Mechanism) takes about 0.78 seconds. On top of all these great improvements, we can add the fact that the new Visual Style Builder significantly reduces the time needed to style a control by entirely changing the approach compared to the old version of this tool. You can be sure that we have already prepared some great new stuff for Q1 2010 SP1 that will simplify things further so that designing themes with the new VSB will become more fun than ever!
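
    For readers who have not used the theming mechanism before, here is a hedged sketch of how a theme such as Desert is typically applied application-wide in code; it assumes the theme has already been loaded (for example by referencing its assembly or loading its theme package) and is not taken from the post itself.

    // Rough sketch (assumption, not from the post): switching RadControls for WinForms to a theme.
    using Telerik.WinControls;

    public static class ThemeSetup
    {
        public static void ApplyDesertTheme()
        {
            // Assumes the Desert theme is already available to the application.
            // ApplicationThemeName switches every RadControl in the app to that theme.
            ThemeResolutionService.ApplicationThemeName = "Desert";
        }
    }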

    Read the article

  • Apple IIGS emulator?

    - by xiaohouzi79
    What is the best quality Apple IIGS emulator for Ubuntu that is relatively easy to install? I have tried KEGS, but get the following (working without probs on my Windows partition):
    Preparing X Windows graphics system
    Visual 0 id: 00000021, screen: 0, depth: 24, class: 4
    red: 00ff0000, green: 0000ff00, blue: 000000ff
    cmap size: 256, bits_per_rgb: 8
    Chose visual: 0, max_colors: -1
    Will use shared memory for X pipes: pipe_fd = 4, 5 pipe2_fd: 6,7
    open /dev/dsp failed, ret: -1, errno:2
    parent dying, could not get sample rate from child ret: 0, fd: 6 errno:11

    Read the article

  • Slow Chat with Industry Experts: Developing Multithreaded Applications

    Sponsored by Intel Join the experts who created The Intel Guide for Developing Multithreaded Applications for a slow chat about multithreaded application development. Bring your questions about application threading, memory management, synchronization, programming tools and more and get answers from the parallel programming experts. Post your questions here

    Read the article

  • Ralink rt3090 driver installed and wireless doesn't work on Ubuntu 10.04

    - by Marcus Rene
    I have an LG A-410 laptop (64-bit) with an RT3090 wireless card. While researching the problem I discovered that I already have rt3090-dkms installed, but my wireless doesn't work.
    *-network UNCLAIMED
        description: Network controller
        product: RT3090 Wireless 802.11n 1T/1R PCIe
        vendor: RaLink
        physical id: 0
        bus info: pci@0000:02:00.0
        version: 00
        width: 32 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list
        configuration: latency=0
        resources: memory:e5400000-e540ffff

    Read the article

  • SSAS Native v .net Provider

    - by ACALVETT
    Recently I was investigating why a new server, which is in its parallel running phase, was taking significantly longer to process the daily data than the server it is due to replace. The server has SQL & SSAS installed, so the problem was not likely to be in the network transfer as it's using shared memory. As I dug around the SQL DMVs I noticed in sys.dm_exec_connections that the SSAS connection had a packet size of 8000 bytes instead of the usual 4096 bytes, and from there I found that the datasource...(read more)

    Read the article

  • CMUS Error: opening audio device: No such device

    - by clamp
    I can't seem to play any audio with CMUS because it always gives the above error. The output of lspci -v | grep -A7 -i "audio" gives:
    00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
        Subsystem: ASRock Incorporation Device c892
        Flags: bus master, fast devsel, latency 0, IRQ 49
        Memory at dff00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities:
        Kernel driver in use: snd_hda_intel
    00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) (prog-if 00 [Normal decode])
    What could be the problem?

    Read the article

  • Using GPU's RAM as RAMDISK

    - by user3476043
    I want to use my GPU's RAM as a ramdisk, following these instructions: http://en.gentoo-wiki.com/wiki/Using_Graphics_Card_Memory_as_Swap But when I input the "modprobe phram phram=VRAM,0xd8400000,124Mi" command, I get the following error: modprobe: ERROR: could not insert 'phram': Input/output error I use Ubuntu Studio 14.04. Also, is there any way I could use more than the 128M of prefetchable memory? My GPU has 1GB of RAM and I would prefer to use "most" of it.

    Read the article

  • EclipseCon 2011

    - by Marcus Hirt
    I sadly could not make it to EclipseCon last year. It was sad for so many reasons, not the least being that Sweden during that part of the year is cold and dark. ;) This year, however, I will be contributing two talks: ---> HotRockit – What to Expect from Oracle’s Converged JVM Oracle is converging the HotSpot and JRockit JVMs to produce a "best of breed JVM". Internally the project is sometimes referred to as the HotRockit project. There is already a large influx of ideas and solutions provided by the JRockit JVM into the Open JDK. Examples of improvements include: Better monitoring and profiling Improved performance Better ergonomics This talk will discuss what to expect from the converged JVM over the next two years, and how this will benefit the Eclipse community. Production-time Problem Solving in Eclipse This session will look at some common problems and pitfalls in Java applications. The focus will be on non-invasive profiling and diagnostics of running production systems. Problems tackled will be: Excessive GC Finding hotspots and optimizing them Optimizing the choice of data structures Synchronization problems Finding out where exceptions are thrown Finding memory leaks All problems will be demonstrated and solved running both the bad-behaving applications and the tools to analyze them from within the Eclipse Java IDE. <--- I hope to meet you there!

    Read the article

  • Problem in installation in My Hp g4 1226se

    - by vivek Verma
    1vivek.100 Dual booting error in Hp pavilion g4 1226se Dear sir or Madam, My name is vivek verma.... I am the user of my Hp laptop which series and model name is HP PAVILION G4 1226SE........ i have purchase in the year of 2012 and month is February.....the windows 7 home basic 64 Bit is already installed in in my laptop.... Now i want to install Ubuntu 12.04 Lts or 13.10 lts..... i have try many time to install in my laptop via live CD or USB installer....and i have try many live CD and many pen drive to install Ubuntu ... but it is not done......now i am in very big problem...... when i put my CD or USB drive to boot and install the Ubuntu......my laptop screen is goes the some black (brightness of my laptop screen is very low and there is very low visibility ) and not showing any thing on my laptop screen..... and when i move the my laptop screen.....then there is graphics option in this screen to installation of the Ubuntu option......and when i press the dual boot with setting button and press to continue them my laptop is goes for shutdown after 2 or 5 minutes..... ...... and Hp service center person is saying to me our laptop hardware has no problem.....please contact to Ubuntu tech support............. show please help me if possible..... My laptop configuration is here...... Hardware Product Name g4-1226se Product Number QJ551EA Microprocessor 2.4 GHz Intel Core i5-2430M Microprocessor Cache 3 MB L3 cache Memory 4 GB DDR3 Memory Max Upgradeable to 4 GB DDR3 Video Graphics Intel HD 3000 (up to 1.65 GB) Display 35,5 cm (14,0") High-Definition LED-backlit BrightView Display (1366 x 768) Hard Drive 500 GB SATA (5400 rpm) Multimedia Drive SuperMulti DVD±R/RW with Double Layer Support Network Card Integrated 10/100 BASE-T Ethernet LAN Wireless Connectivity 802.11 b/g/n Sound Altec Lansing speakers Keyboard Full size island-style keyboard with home roll keys Pointing Device TouchPad supporting Multi-Touch gestures with On/Off button PC Card Slots Multi-Format Digital Media Card Reader for Secure Digital cards, Multimedia cards External Ports 1 VGA 1 headphone-out 1 microphone-in 3 USB 2.0 1 RJ45 Dimensions 34.1 x 23.1 x 3.56 cm Weight Starting at 2.1 kg Power 65W AC Power Adapter 6-cell Lithium-Ion (Li-Ion) What's In The Box Webcam with Integrated Digital Microphone (VGA) Software Operating System: Windows 7 Home Basic 64bit....Genuine..... ......... Sir please help me if possible....... Name =vivek verma Contact no.+919911146737 Email [email protected]

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded node NUMA node (the node with lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect. On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph. I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creating between the processes, so I was relatively sure the effect didn't related to thread creation time, but rather that placement was a function of process membership. I this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different that what we've seen on Solaris 10. 
So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. Lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more. To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code. Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could the topic of a future blog entry. The scheduler is work-conserving. The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, following by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows. [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly. Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. 
That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD) which packs threads instead of spreading them out in an attempt to be able to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management polices constitute yet another confounding factor with respect to scheduling and dispatching. If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand if your threads in your process communicate rarely, then it's possible the new packing policy might result on contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)

    Read the article
