Search Results

Search found 9124 results on 365 pages for 'big sal'.


  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors get hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer for the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand I wonder how much influence the lower levels can have, for example this whole graphics computing thing. Then, on the other hand, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for a big N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps who rarely interact. For the last few weeks I have been reading about security issues, and here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.

    Read the article

  • Curating projects of deceased friends

    - by Ant
    A very good friend of mine, and an avid programmer, recently passed away. He left nearly 40 projects on BitBucket. Most of them are public, but a few of them are marked as private. I've decided to take on curation duties for the projects rather than leave his work to disappear. If you have been in the same situation, what did you do? Did you open-source everything? Continue development? Delete it all? I'm very interested to hear other people's experiences. There are a few reasons why some of the projects are marked as private (private projects on BitBucket are visible only to invited users and the original creator): One of them is an iOS web app that was free in the app store. I've had to remove the app from the store as I'm shutting down his web sites as a favour to his widow. However, I've already made the app public under the GPL v3 (he was a big GPL supporter). One of them contains proprietary code. It can't be open-sourced. Others are very much works in progress. I don't know if he intended to make them into hosted, paid services or if he wanted to give the code away under an open-source licence when they were finished. Here's a list of the private projects: Some kind of living cell simulator that uses SBML along with Runge-Kutta and Euler algorithms to do... something. There's a fair amount of code here but I don't know what it does or how far along it is. No docs. An accountancy application; it seems to have a solid DB design behind it but there's little code on top of that. A website whose purpose is to suggest good restaurants, built on Yii. Seems to have a lot of code but I'd need to set up a WAMP stack to see how far along it is. A website intended to host memorials to people who suffered from the same problem he did. Built on Joomla. I'm not sure how much of the code is just Joomla and how much is custom; again, I'd need to get Joomla running to find out. I'd just introduced him to Mercurial and BitBucket. All of the private projects are single commits of codebases he either wasn't keeping under version control or was previously keeping in SVN. I don't have the SVN repositories, so I can't see the commit logs.

    Read the article

  • Are You Meeting Social Customer Service Expectations?

    - by Mike Stiles
    Whether it’s B2B or B2C, one sure path to repeat business is making sure your buyer has a memorably pleasant and successful customer service experience with you. If they get that kind of treatment consistently, that’s called a relationship. And those aren’t broken easily. Social customer service, driven by integrated SRM (social relationship management) technology, is the venue that can effectively connect customers not only to the brand, but to other customers. Positive experiences, once administered, don’t just rest with the recipient. They’re published in the form of public raves and peer-to-peer recommendation, a force far more actionable than push advertising. What’s more, your customers have come to expect access to you and satisfaction from you using social. An NM Incite study shows 83% of Twitter users and 71% of Facebook users expect to get an answer from brands the same day they post to them on their social assets. To make sure you’re responding, you’ve got to have a tech platform that’s set up to moderate and alert so you’ll know ASAP a customer needs help. The more integrated your social enterprise is, the faster you can not only respond, but respond with the answer they’re looking for, because your system is connected to the internal resources that can surface the answer or put wheels in motion to rectify the situation in the shortest amount of time possible. But if you go to the necessary lengths to make sure your customers feel valued and important, will they really reward you? The study says 71% of consumers who got quick and effective responses from companies they contacted via social were more likely to recommend the brand to their friends and followers. So yes, sweeping people off their feet pays big dividends in terms of word-of-mouth marketing. But you should be keenly aware of the reverse side of that coin. Give people a negative experience, either in real world or virtual customer service, and that message is highly likely to get amplified through social channels faster and louder. Only 36% of the NM Incite study’s respondents reported that their problems were solved quickly and effectively. 36%? That’s hardly an impressive number. It gets worse. 10% never got so much as a response - at all. Going back to the relationship analogy, companies that are this deep in the ditch where customer service is concerned are making their girl or boyfriends really easy for a competitor to steal. Given the technology tools and data available right now for having an intimate knowledge of the customer, what products they’ve purchased, likely problems with those products, effective resolutions to those problems, and follow-up communication to gauge satisfaction, there are fewer excuses than ever for making the lifeblood of your business feel like you couldn’t care less. @mikestiles

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages of my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I've been a data professional myself for many years. Over that time I've worked with many database platforms and lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a "set" base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the way the product succeeds, and I see when it fails. Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I've always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there's never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don't. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I've found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious – but I've actually tracked it, and those places that do the testing and follow best practices really do save stress, time and trouble for that effort. We all think that's a good idea, but we just "don't have time". OK – but from what I'm seeing, you can gain time if you spend a little up front. You may find that you're actually already spending the same amount of time that you would spend in doing the testing; you're just doing it later, at night, under the gun. Food for thought.

    Read the article

  • Will not supporting IE or older browsers drive away potential visitors/users of my site? [closed]

    - by XToro
    Normally an SO browser, but this question doesn't fit there; hopefully it fits here. I just want to ask, from web designers' point of view, whether it's wrong not to care about supporting Internet Explorer or older browsers. The site I'm designing looks great in all browsers except IE9 and below. There are certain things that IE doesn't support or doesn't handle like other browsers: WebKit stuff, some CSS styles, drag-and-drop files from the OS, etc., but it all works great in Safari, Firefox, Chrome etc. Should I be that concerned? I know there are several people that use IE, but its limitations have just been causing me more work by having to come up with workarounds. From what I've read, many of the issues I've been having should be solved with IE10, but not everybody keeps up to date. I know of several people who are still using IE6! Again, I'm hoping this is the right place to ask a question like this, and if not, please point me to the right Stack Exchange site instead of just downvoting me. Thanks! EDIT: Upon further research... So far this year, IE (all versions) and Chrome have been neck and neck at the top, with IE only squeaking by Chrome, and Firefox a close third. But looking at the top 10 browsers, IE6 doesn't even show up on this list, in which the lowest percentage is 1.92%. Source: http://www.w3counter.com/globalstats.php?year=2012&month=7 Having a look at this other site, IE6 shows up in 11th place out of 12, just before "Other": http://www.sitepoint.com/browser-trends-february-2012/ This makes me a little more wary of not spending more time on IE compatibility. However, my site will not be going to a live beta until October or November, and I'm hoping that IE10 will have more features coded into it. Currently, I've written my upload page, which is a "drag-and-drop files from the OS" type, to simply display "IE is not supported", leaving no other option for IE users to upload pictures, because I've spent so much time writing the uploader, which does many things other than just upload the files. I will be changing this kinda cold "Access Denied" to a suggestion to upgrade or install another browser, with download links for each. Big thanks for the posts here and the interesting links!

    Read the article

  • SSMS Tools Pack 3.0 is out. Full SSMS 2014 support and improved features.

    - by Mladen Prajdic
    With version 3.0, SSMS 2014 is fully supported. Since this is a new major version you'll eventually need a new license; please check the EULA to see when. As a thank you for your patience with this release, everyone who bought the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade. You won't have to do anything for this to take effect. The first thing you'll notice is that the UI has been completely changed: it's more in line with SSMS and looks less web-like. The core has also been updated and rewritten in places to be better suited for future features. The major improvements in this release are:

    Window Connection Coloring: Something a lot of people have asked me over the last 2 years is whether there's a way to color the tab of the window itself. I'm very glad to say that now there is. In SSMS 2012 and higher, the actual query window tab is also colored at its top border with the same color as the already existing strip, making it much easier to see which server your query window is connected to, even when the window is not focused. To make it even better, you can now also specify the desired color based on the database name and not just the server name. This makes it useful for production environments, where you need to be careful which database you run your queries in.

    Format SQL: The Format SQL core was rewritten so it'll be easier to improve in future versions. A new improvement is the ability to terminate SQL statements with semicolons. This is available only in SSMS 2012 and up.

    Execution Plan Analyzer: A big request was to implement the Problems and Solutions tooltip as a window that you can copy the text from. This is now available: you can move the window around and copy text from it. It's a small improvement, but better stuff will come.

    SQL History: Current Window History has been improved with faster search and now also shows the color of the server/database the query was run against. This is very helpful if you change your connection in the same query window, making it clear which server/database you ran the query on. The option to Force Save the history has been added. This is a menu item that flushes the execution and tab content history save buffers to disk.

    SQL Snippets: Added an option to generate a snippet from selected SQL text via the right-click menu.

    Run script on multiple databases: Configurable database groups that you can save and reuse were added. You can create groups of preselected databases to choose from for each server. This makes repetitive tasks much easier.

    New small team licensing option: A lot of requests came in for a 1 computer / unlimited VMs option, so now it's here.

    Hope it serves you well.

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records. Records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...). For each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced. I have to implement this in Java or C++. My first idea was to define functions / methods like: One method to get all the lines from the file. One method to filter out the unwanted lines. One method to parse the filtered lines into valid records. One method to remove duplicate records. One method to group records and number them. The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once because once a record has been consumed all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell I have immediately thought about some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is which design pattern or other technique can allow me to implement this lazy processing of streams in one of these languages.
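
    For what it's worth, the laziness described above maps fairly directly onto composing iterators (the classic Iterator/Decorator approach in Java or C++) and, in modern Java, onto java.util.stream, where Files.lines() reads the file on demand instead of loading it whole. Below is only a rough sketch of that idea: the Rec type, the ";" field separator, the comment-line filter and the printf consumer are invented stand-ins for the real parsing and consumption rules, and the stateful duplicate-removal and numbering rely on the stream staying sequential.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Objects;
        import java.util.stream.Stream;

        public class LazyPipeline {

            // Placeholder record type; parse() stands in for the real parsing/validation rules.
            record Rec(String groupKey, String payload) {
                static Rec parse(String line) {
                    String[] parts = line.split(";", 2);
                    return parts.length == 2 ? new Rec(parts[0], parts[1]) : null; // null = invalid
                }
            }

            public static void main(String[] args) throws IOException {
                try (Stream<String> lines = Files.lines(Paths.get(args[0]))) {  // lazy: lines read on demand
                    Rec[] prev = {null};    // state for consecutive-duplicate removal
                    int[] counter = {0};    // per-group numbering state

                    lines.filter(l -> !l.isBlank() && !l.startsWith("#"))       // drop unwanted lines
                         .map(Rec::parse)                                       // parse into records
                         .filter(Objects::nonNull)                              // drop invalid records
                         .forEachOrdered(rec -> {                               // dedupe, number, consume in order
                             if (rec.equals(prev[0])) {
                                 return;                                        // consecutive duplicate: skip
                             }
                             boolean newGroup = prev[0] == null
                                     || !prev[0].groupKey().equals(rec.groupKey());
                             counter[0] = newGroup ? 1 : counter[0] + 1;        // numbering restarts per group
                             prev[0] = rec;
                             System.out.printf("%s #%d: %s%n", rec.groupKey(), counter[0], rec.payload());
                         });
                }
            }
        }

    Only the current line and a one-record look-behind are ever held in memory, which is the property the poster is after; in C++ the same shape is usually built by chaining input iterators or ranges.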

    Read the article

  • What's a good way to get an IT internship? [closed]

    - by user1419715
    I'm a second-year CS student who's worked really hard to build and expand my skills. I've spent the past week now trying to find a place to volunteer (i.e. work for FREE) so I can get a little bit of in-the-door experience with web development. I have a portfolio with several decent projects, a handful of languages and other hard/soft skills that employers constantly say they're clamoring for. I can't even get people to take my calls. This is me offering to work for them for FREE, remember. I'm in a reputable program at a respected school, get decent grades and... yeah, I've worked really hard to be presentable. On the rare occasions I actually get to speak to somebody at a design firm they hedge and do everything they can to get me off the phone. Nobody's ever expressed even the slightest interest in taking me on. The answer to the experience problem is supposed to be "you need to spend a year or two building up a big portfolio of projects on your own" so that employers will be impressed. I've done that: websites, standalone apps, etc. Nobody will even look at my resume, though. Question: Why does there seem to be so little interest in taking on unpaid interns in the world of IT? Update: Sorry you all think I'm too aggressive or angry. It wasn't my intent to be a jerk to people while asking them for their opinions. That said, how would you feel if employer after employer turned you down cold when you offered yourself to them without asking for remuneration? One can't even get an unpaid job in this economy now, it seems. How am I going about my search? I find web firms in my area and contact them via email with a brief sales pitch of myself and a resume attached. Then a couple of days later I follow up with a phone call. Nobody--anywhere--is advertising for interns of any kind. If there were, I'm sure there'd be about 500 resumes per position, even unpaid. I've had good experiences in the past with cold-calling firms for actual paid jobs in other industries (hiring is a pain-in-the-ass process, and a call like this can show initiative while reducing a busy employer's need to do all the hiring overhead work), so I thought volunteering would work at least as well. My skills are pretty good for a CS student and include the usual suspects: HTML/CSS/JavaScript, Python, Java, C, C#/.NET, etc. I made a point on my resume to tie each ability claim to a project as well. Oh, and regarding the "working for free still costs the employer money" argument: that's an excellent point I hadn't thought of. But it means... what? I have to pay the employer for the privilege of working there now?

    Read the article

  • How bad does it look to have left a job soon after starting? [closed]

    - by unitedgremlin
    I have a job I would like to leave. On advice from friends and parents I have stayed; their primary concern is that it would look bad on my resume if I left only a few months after joining. My concerns with the job are as follows:
    - When I started, it was preferred that I provide and use my own equipment.
    - The company could be out of business in a few months from lack of cash flow.
    - Poor code quality: memory leaks and lack of error handling. The same mistakes continue to be made even though I have raised the issue. It has become evident that co-workers do not understand the memory management rules of the platform and are not interested in learning them. Yet there is still surprise from them when strange bugs continue to crop up. As a result, I don't feel I will learn from co-workers. Plus, fixing the lingering bugs while trying to keep up with new feature development is like a game of whack-a-mole that never ends.
    - I don't believe in the company's vision or its ability to execute on the ideas anymore. My ideas and suggestions for very small tweaks are quickly dismissed, and so more than half of them have come back as bugs that we end up needing to address.
    - I have been told to wait on fixing bugs in code until we can talk to the original author. I don't feel I am allowed to take the initiative to just fix/change things and do what I think is best. Everything needs consensus, even for a bug fix, before any work is done. I am adopting a shut-up-and-just-do-what-I'm-told approach to save myself from ulcers.
    - Lots of meetings (I am personally not involved in all of them, which is good), but the sheer amount reminds me of my days at a big corporation. Why is everyone around me always meeting? It's a small company; I can count everyone on my toes and fingers.
    I can say with certainty I have no interest in working with any of them again. This is the first time I have truly worked with a group of so-called "B and C players". Ultimately, I think it is my fault for not doing a better job of evaluating the team and company before joining. But I have generated a better set of questions to ask when probing companies in the future. My questions are:
    - How bad does it look to have left a job soon (a few months) after starting?
    - What would be the best way to explain my concerns and reasons for leaving without badmouthing the company?
    - Should I stick it out until what I believe will be the company's imminent end?

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED
            (
                [Period] ASC,
                [PlacementID] ASC,
                [CreativeID] ASC,
                [PublisherID] ASC,
                [RequestedZoneID] ASC,
                [AboveFold] ASC,
                [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:
    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5000000.
    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:

    Design Goals
    This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor
    There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.

    Read the article

  • Automatically change resolution when not in dock

    - by jwir3
    I have Ubuntu 11.04 (yep, I know it's old news) on my Lenovo W520. At home, I have a dock with dual monitors. I have a pretty decent setup - things work almost perfectly (hence the reason I'm reluctant to upgrade... that and I'm not 100% sold on Unity). Anyway, the only annoyance I have is that when I'm on travel, I use the laptop screen. When I un-dock the laptop, I need to manually go into nvidia x-server settings and change the resolution from 'Auto' to 1920x1200, or it will think I have two screens, and my mouse pointer will be able to go way off the left side of the screen. This isn't a big deal, but I need to do it every time I restart the x-server (so if I reboot, or have to kill it, etc...) What would be really nice is if there was a way for it to automatically detect whether or not there are external monitors (which it seems to do already), and switch into the mode I select, depending on which monitors are connected. Is there any way to accomplish this? I've posted my xorg.conf file for reference.

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011
        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 275.19 ([email protected]) Tue Jul 12 18:35:38 PDT 2011

        #Section "Monitor"
        #    Identifier  "Monitor1"
        #    VendorName  "Lenovo"
        #    ModelName   "ThinkpadLCD"
        #    #HorizSync   28.0 - 33.0
        #    #VertRefresh 43.0 - 72.0
        #    #Option      "DPMS"
        #EndSection

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "InputDevice"
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "DELL U2410"
            HorizSync       30.0 - 81.0
            VertRefresh     56.0 - 76.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "Quadro 1000M"
            Option         "RegistryDwords" "EnableBrightnessControl=1"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+120, DFP-6: nvidia-auto-select +1920+0"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+120, DFP-5: nvidia-auto-select +1920+0"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +1920+419, DFP-5: nvidia-auto-select +3840+0, DFP-6: nvidia-auto-select +0+0"
            # Removed Option "metamodes" "DFP-5: nvidia-auto-select +0+0, DFP-6: 1920x1200 +1920+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "NoLogo" "True"
            Option         "TwinViewXineramaInfoOrder" "DFP-0"
            Option         "TwinView" "1"
            Option         "metamodes" "DFP-5: nvidia-auto-select +1920+0, DFP-6: 1920x1200 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Read the article

  • Taking too long to get skills for entry level programmer position [closed]

    - by greenonion
    I don't have the skills for an entry level position as a .Net programmer. I am trying to learn what I need but there is too much to learn and too little time. What can I do? About two months ago, I went to a job interview for an entry level C# .Net programming/consultant position in NYC. When I heard back from them, they told me that the knowledge gap between what I knew and what they needed me to know was too big and I might have been a better fit if I had 6 months of experience. This was the first interview that I went on since graduating college. before the interview, I read a book on visual C#. Turns out it wasn't a very good book and I was missing a lot of key areas of knowledge such as ADO.net SQL (I had learned some LINQ) A little bit about how memory is handled Multiple threaded programming, etc. Because the book wasn't very good, the stuff I did know, I didn't know very well. I felt crushed. I've applied for jobs to gain experience but when recruiters hear that I have no experience they lose interest. I figured that I can at least work on my knowledge. Since then, I read "SQL Essentials" to cover the SQL bit and I found a pretty awesome book that is good enough to clear up what's hazy in my mind and covers almost all of the extra topics. The book is "C# 4.0: The Complete Reference" by Herbert Schildt. I'm even learning a lot about the topics I was familiar with. For a month now I've been working my way through this beast of a book. However, gaining the knowledge I need is taking too long. I can't hold off not having a full-time job much longer. I'm not stupid and I'm studying constantly pouring through the book, asking questions on stackoverflow, referencing the C# specification, etc. I have made great progress but there is just too much ground to cover. I'm on chapter 12 which is about a 3rd through the book. To get an idea of what I know vs don't know, the table of contents is on amazon: http://www.amazon.com/C-4-0-The-Complete-Reference/dp/007174116X How on earth can someone know enough to function as a programmer in the real world? Can I try for a job in academia? Will I have time to finish learning the rest of the C# language or am I just un-hireable?

    Read the article

  • Application workflow

    - by manseuk
    I am in the planning process for a new application. The application will be written in PHP (using the Symfony 2 framework), but I'm not sure how relevant that is. The application will be browser based, although there will eventually be API access for other systems to interact with the data stored within the application; again, probably not relevant at this point. The application manages SIM cards for lots of different providers - each SIM card belongs to a single provider, but a single customer might have many SIM cards across many providers. The application allows the user to perform actions against the SIM card - for example Activate it, Barr it, Check on its status, etc. Some of the providers provide an API for doing this - so a single access point with multiple methods, e.g. activateSIM, getStatus, barrSIM, etc. The method names differ for each provider, and some providers offer methods for extra functions that others don't. Some providers don't have APIs but do offer these methods by sending emails with attachments - the attachments are normally a CSV file that contains the SIM reference and action required - the email is processed by the provider and replied to once the action has been completed. To give you an example - the front end of my application will provide a customer with a list of SIM cards they own and give them access to the actions that are provided by the provider of each specific SIM card - some methods may require extra data, which will either be stored in the backend or collected from the user frontend. Once the user has selected their action and added any required data, I will handle the process in the backend and provide either instant feedback, in the case of the providers with APIs, or start the process off by sending an email and waiting for its reply before processing it and updating the backend, so that next time the user checks the SIM card its status is correct (i.e. updated by a backend process). My reason for creating this question is because I'm stuck!! I'm confused about how to approach the actual workflow logic. I was thinking about creating a Provider interface with the most common methods getStatus, activateSIM and barrSIM and then implementing that interface for each provider. So class Provider1 implements Provider - then use a Factory to create the required class depending on the user-selected SIM card and invoke the method selected. This would work fine if all providers offered the same methods, but they don't - there is a subset which is common, but some providers offer extra methods - how can I implement that flexibly? How can I deal with the processes where the workflow is different - i.e. some methods require an API call and a returned value, and some require an email to be sent, where the next stage of the process doesn't start until the email reply is received... Please help! (I hope this is a readable question and that this is the correct place to be asking.) Update: I guess what I'm trying to avoid is a big if or switch/case statement - some design pattern that gives me a flexible approach to implementing this kind of fluid workflow... anyone?
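
    One way to keep the per-provider differences out of if/switch statements, sketched very roughly below in Java (the project itself is PHP/Symfony, but the shape translates): a small Provider interface for the common operations, separate capability interfaces for the optional extras, and an action result that can be "pending" so the email-and-CSV providers fit the same calling code as the API-backed ones. Everything here - ActionStatus, SupportsDataTopUp, EmailCsvProvider, the "acme-mobile" key and the queueCsvEmail() call - is an invented placeholder, not part of any real provider's API.

        import java.util.Map;
        import java.util.Optional;

        // Outcome of an action: API-backed providers answer immediately; email-backed
        // ones return PENDING and a later inbound-mail handler completes the request.
        enum ActionStatus { COMPLETED, PENDING, FAILED }
        record ActionResult(ActionStatus status, String detail) {}

        // Operations every provider supports.
        interface Provider {
            ActionResult activateSim(String simRef);
            ActionResult barrSim(String simRef);
            ActionResult getStatus(String simRef);
        }

        // Optional extras live in capability interfaces instead of bloating Provider.
        interface SupportsDataTopUp {
            ActionResult topUp(String simRef, int megabytes);
        }

        // A provider that works over email: it queues a CSV and reports PENDING.
        class EmailCsvProvider implements Provider {
            public ActionResult activateSim(String simRef) {
                // queueCsvEmail("ACTIVATE", simRef);  // hypothetical mailer call
                return new ActionResult(ActionStatus.PENDING, "awaiting provider reply");
            }
            public ActionResult barrSim(String simRef) {
                return new ActionResult(ActionStatus.PENDING, "awaiting provider reply");
            }
            public ActionResult getStatus(String simRef) {
                return new ActionResult(ActionStatus.COMPLETED, "last status stored in backend");
            }
        }

        // A registry keyed by provider name replaces the big switch statement.
        class ProviderRegistry {
            private final Map<String, Provider> byName = Map.of("acme-mobile", new EmailCsvProvider());

            Provider forProvider(String name) {
                return byName.get(name);
            }

            // The front end only offers an extra action if the provider has the capability.
            Optional<SupportsDataTopUp> topUpFor(String name) {
                Provider p = byName.get(name);
                return (p instanceof SupportsDataTopUp t) ? Optional.of(t) : Optional.empty();
            }
        }

        // Tiny demonstration of the calling side.
        class Demo {
            public static void main(String[] args) {
                ProviderRegistry registry = new ProviderRegistry();
                ActionResult r = registry.forProvider("acme-mobile").activateSim("8931-0000-0000-0000");
                System.out.println(r.status() + ": " + r.detail());
            }
        }

    The asynchronous providers are then handled by whatever processes the inbound reply emails: it looks up the pending action and updates the backend, so the calling code never needs to know which kind of provider it talked to.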

    Read the article

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification? Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose? My preference is for the 8k (8192 bytes) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching. I prefer to have more, individual "working units" in my database. For an instance with 4gb of buffer cache, an 8k block will provide 524,288 blocks of cache. The following SQL*Plus script returns the average, median, min, and max rows per block.

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
            select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                 , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
                 , count(*) cnt
            from &tab
            group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                   , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table, I get this result on a database with an 8k block size. Each activity, on average, has about 19 rows per block.

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.

    Read the article

  • What is the right way to group this project into classes?

    - by sigil
    I originally asked this on SO, where it was closed and recommended that I ask it here instead. I'm trying to figure out how to group all the functions necessary for my project into classes. The goal of the project is to execute the following process:
    1. Get the user's FTP credentials (username & password).
    2. Check to make sure the credentials establish a valid connection to the FTP server.
    3. Query several Sharepoint lists and join the results of those queries to create a list of items that need to have action taken on them. Each item in the list has a folder.
    4. For each item: zip the contents of the folder, upload the folder to the FTP server using SFTP, and update the item's Sharepoint data.
    5. Email the user an Excel report showing, e.g., items without folder paths and items that failed to zip or upload.
    Steps 2-5 are performed on a periodic basis; if step 2 returns an invalid connection, the user is alerted and the process returns to step 1. If at any point the user presses a certain key, the process terminates. I've defined the following set of classes, each of which is in its own .cs file:
    - SFTP: file transfer processes
    - DataHandler: Sharepoint data retrieval/querying/updating processes. Also makes and uploads the zip files.
    - Exceptions: not just one class; this is the .cs file where I have all of my exception classes.
    - Report: builds and sends the report.
    - Program: the main class for running the program.
    I recognize that the DataHandler class is a god object, but I don't have a good idea of how to refactor it. I feel like it should be more fine-grained than just breaking it into Sharepoint, Zip, and Upload, but maybe that's it. Also, I haven't yet worked out how to combine the periodic behavior with the "wait for user input at any point in the process" part; I think that involves threads, which means other classes to manage the threads... I'm not that well-versed in design patterns, but is there one that fits this project well? If this is too big of a topic to neatly explain in an SO answer, I'll also accept a link to a good tutorial on what I'm trying to do here.
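
    For whatever it's worth, one common way to break up a god object like DataHandler is to give each step of the run its own narrow interface and keep one small orchestrator that only knows the order of the steps. A very rough sketch of that shape follows (in Java rather than C#, purely for illustration; every name here - WorkItemSource, FolderArchiver, TransferRun and so on - is invented, not an existing API):

        import java.nio.file.Path;
        import java.util.List;

        record WorkItem(String id, Path folder) {}

        // Each step gets its own narrow interface, so no single class has to know
        // about Sharepoint, zipping, SFTP and email all at once.
        interface WorkItemSource { List<WorkItem> pendingItems(); }              // Sharepoint queries/joins
        interface FolderArchiver { Path zip(Path folder) throws Exception; }     // zip the item's folder
        interface FileTransfer   { void upload(Path archive) throws Exception; } // SFTP upload
        interface ItemUpdater    { void markProcessed(WorkItem item); }          // write back to Sharepoint
        interface RunReporter    { void addFailure(WorkItem item, Exception cause); void send(); }

        // The orchestrator owns only the order of the steps; the pieces are injected,
        // which also makes each one testable on its own.
        class TransferRun {
            private final WorkItemSource source;
            private final FolderArchiver archiver;
            private final FileTransfer transfer;
            private final ItemUpdater updater;
            private final RunReporter reporter;

            TransferRun(WorkItemSource source, FolderArchiver archiver, FileTransfer transfer,
                        ItemUpdater updater, RunReporter reporter) {
                this.source = source;
                this.archiver = archiver;
                this.transfer = transfer;
                this.updater = updater;
                this.reporter = reporter;
            }

            void runOnce() {
                for (WorkItem item : source.pendingItems()) {
                    try {
                        transfer.upload(archiver.zip(item.folder()));
                        updater.markProcessed(item);
                    } catch (Exception e) {
                        reporter.addFailure(item, e);   // failed zips/uploads end up in the report
                    }
                }
                reporter.send();
            }
        }

    The periodic schedule and the "stop on key press" concern then live outside runOnce(), in whatever timer or loop drives it, so the cancellation logic never leaks into the individual steps.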

    Read the article

  • JOB OF THE WEEK

    - by Maria Sandu
    My name is Pascaline and I am the EMEA Solution Response Manager. I currently have a role open for a Benelux Solution Response Representative to jump-start his/her career in my international team of six people from all across Europe. Key for this exciting role is that you are curious to learn, like networking and constantly want to develop yourself. To help you with that, you will get extensive product trainings and workshops on all Oracle product lines and you will receive sales training. Further, you have the opportunity to get certified on Oracle products through online trainings and workshops. Every month you will also benefit from 1-on-1 sales coaching and regular coaching from me to help you grow and develop your career at Oracle! The role will include the follow-up of marketing events and online marketing activities with current Key Accounts in the Benelux. It is truly a pioneering role at Oracle, as it is the first time that an employee will engage in business conversations about all lines of business and product ranges with Key Accounts. So are you interested in working in between marketing and sales? Do you want to work for a big IT multinational? Do you want to work abroad after you graduate, and do you want to develop yourself? Then please visit this link for more information.

    Read the article

  • Differences between a retail and wholesale website, and how difficult would it be to integrate them? [closed]

    - by kmy
    I was told to come here for some guidance, I know that my questions can be quite broad, but I just really want some kind of direction to go towards (links, articles, etc). So here it goes: For the past month I have been planning and started implementing a wholesale website, but plans got changed on me and I need to allow both retail and wholesale orders which will also depend on whether the account is a regular customer or also has a seller account. There are various types of products and it has been decided that I would use subcategories as tables to organize them and allow various specific item fields. So here are some of my specific questions: How different is the database structure for each item to be available for both wholesale and retail? Will it just be adding a quantity column and have two different pricing, or much more complicated? I am unaware if there's any price tables but would that be more difficult? They use Quickbooks POS software, how difficult/inefficient will it be to update inventory quantities if they have like 2000-4000 products? And what would be the best ways to extract the information and update the system? I know it export an excel spreadsheet so maybe a suggested php plugin for it? How difficult is this project in your professional opinion, and how big should a team usually be? (At the moment it is just me.) Projected time for planning, implementation, and quality assurance in accordance to team size? I am an entry level developer and I know that I do not have enough knowledge to direct myself on this website...What kind of web developer skills will I need to find to help me? (My company is willing to hire people to get this website done as fast as possible.) Also, what would be some great questionnaires to ask the product stakeholder about what he wants from the website? (He has made it clear he is neither computer or internet savvy...) Sorry for the amount of questions, I'm an entry level web developer and do not have a senior to look up to for guidance. I have knowledge in HTML, CSS, Javascript, jQuery, PHP, MySQL, phpMyAdmin. I have never used any frameworks like Zend, Magento, etc, and is doing this all from scratch. So far the website is built in an object oriented way, MVC architecture to the best of my abilities, but I have many doubts because I would really want to have this done right. If I have been unclear on some parts please tell me and I'll add more detail. I'm sure I'll have more additional questions later on, if anyone is open to that please tell me. Thanks in advance!

    Read the article

  • Postgres - could not create any TCP/IP sockets

    - by Jacka
    I'm running a rails app in development with postgresql 9.3. When I tried to start passenger server today, I got:

        PG::ConnectionBad - could not connect to server: Connection refused
        Is the server running on host "localhost" (217.74.65.145) and accepting
        TCP/IP connections on port 5432?

    No big deal I thought, that happened before. Restarting postgres always solved the problem. So I ran sudo service postgresql restart and got:

        * Restarting PostgreSQL 9.3 database server
        * The PostgreSQL server failed to start. Please check the log output:
        2014-06-11 10:32:41 CEST LOG: could not bind IPv4 socket: Cannot assign requested address
        2014-06-11 10:32:41 CEST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
        2014-06-11 10:32:41 CEST WARNING: could not create listen socket for "localhost"
        2014-06-11 10:32:41 CEST FATAL: could not create any TCP/IP sockets
        ...fail!

    My postgresql.conf points to the defaults: localhost and port 5432. I tried changing the port but the error message is the same (except the port change). Both ps aux | grep postgresql and ps aux | grep postmaster return nothing.

    EDIT: In postgresql.conf I changed listen_addresses to 127.0.0.1 instead of localhost and it did the trick, server restarted. I also had to edit my applications' db config and point to 127.0.0.1 instead of localhost. However, the question is now, why is localhost considered to be 217.74.65.145 and not 127.0.0.1? That's my /etc/hosts:

        127.0.0.1 local
        127.0.1.1 jacek-X501A1
        127.0.0.1 something.name.non.example.com
        127.0.0.1 company.something.name.non.example.com

    Read the article

  • Unpacking a ZTE ZXV10 H108L router firmware

    - by v3ng3ful
    I binwalked a firmware image of a ZTE ZXV10 H108L and got some encouraging results: a uImage/U-Boot header and LZMA compression, as well as a SquashFS 3.0 LZMA-compressed filesystem.

        256      0x100     uImage header, header size: 64 bytes, header CRC: 0xE70BCBB9, created: Thu Nov 10 04:54:54 2011, image size: 804172 bytes, Data Address: 0x80002000, Entry Point: 0x80266000, data CRC: 0x6EFE90F1, OS: Linux, CPU: MIPS, image type: OS Kernel Image, compression type: lzma, image name: MIPS Linux-2.6.20
        320      0x140     LZMA compressed data, properties: 0x5D, dictionary size: 8388608 bytes, uncompressed size: 2637958 bytes
        851968   0xD0000   Squashfs filesystem, big endian, lzma signature, version 3.0, size: 2543403 bytes, 632 inodes, blocksize: 65536 bytes, created: Thu Nov 10 04:56:12 2011

    What I did next was test several portions of the file (320 bytes to end, 851968 bytes to end, and many more) using dd, and try to uncompress/unpack the filesystem of the firmware with certain tools. After some digging I found out that the best tool to do this is the firmware_mod_kit, which understands a squashfs-lzma v3 filesystem. Although I ended up really frustrated, as unsquashfs-lzma v3 reported a cold "zlib::uncompress failed, unknown error -3". Do you have any ideas? Could it be that the firmware is corrupted on purpose to discourage attempts like this?
    Router file
    Thanks

    Read the article

  • How to stop DW20.exe running on Win 2003 Server (usual solutions already tried)

    - by Laurence
    Periodically my ASP.NET application crashes (usually because memory consumption exceeds the maximum allowed by the application pool) and DW20.exe starts up. This is a big problem because it uses huge amounts of memory and CPU for minutes at a time. I want to know how to stop DW20.exe from running. Please note, I have already tried these often-mentioned solutions:
    - Disabling error reporting in Control Panel > System > Advanced > Error Reporting
    - Disabling the Error Reporting Service
    - Modifying the registry as in http://support.microsoft.com/kb/841477 (however, I might have done this wrong - this doc says "add a DWReportee value of 1" - what I did was add a DWORD entry with a hexadecimal value of 1 - is this right? Also, only 2 of the 4 keys were present, so I only modified these; e.g. there was no HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\PCHealth key at all)
    So, zero points for suggesting any of the above (unless you can see I have modified the registry incorrectly)! Also zero points for suggesting I resolve whatever is causing the application crashes :) - I am figuring this out; I just want something in the meantime to stop DW20.exe eating up all the server resources. By the way, this is a Windows 2003 SP1 server, with IIS 6 and SQL 2005 installed. There is no MS Office. Thx

    Read the article

  • Windows Server 2003 Trial Activation Issue

    - by Adam Batkin
    I have a Windows Server 2003 (R2 Enterprise with SP2) VM, originally installed with a trial license. We forgot about the server, and now more than 120 days has passed, and I can't do anything with the server. I seem to be at a dead end with the existing installation. When I log in, I get: The evaluation period for this copy of Windows has ended. Windows cannot start. To continue using Windows, please purchase and install a retail copy of the product. Fine. I'll do that with my MSDN media. I should add that safe mode works, but there isn't anything obvious that I found to help me there Next up, I tried repairing my installation: Boot from Server 2003 R2 Enterprise with SP2 media, tell it I want to install (as opposed to recovery console), then let it repair the existing install. Once that completes and reboots I log in: This copy of Windows must be activated with Microsoft before you can continue. You cannot log on until you activate Windows. Do you want to activate Windows now? To shut down the computer, click Cancel. Great! I click "Yes" and am left with a big blue screen. Not a blue screen of death, just a blue screen (i.e. the default windows desktop background color). No Ctrl+Alt+Del. All I can do is power cycle. I have some complex third-party software on there that I can't reinstall, which is why I haven't already built a fresh Windows VM and copied everything over. I have a backup of the VM from after trial period expired but before I installed anything. Ideas?

    Read the article

  • SBS 2008 BPA Warnings After Migration From SBS 2003

    - by Nicholas Piasecki
    We just finished a we-know-just-enough-to-be-dangerous migration from SBS 2003 to SBS 2008, and things seem to have gone relatively smoothly. After running the SBS 2008 Best Practices Analyzer on the destination server, we've got three warning messages, and I can't tell if they're important or not. First, the easy one: SMTP Port (TCP 25 Status): The Edgetransport.exe process should listen on SMTP port 25, but that port is owned by the process. I don't think that this one is a big deal--e-mail is flowing through the SMTP connector. Since there are two spaces between "the" and "process," I'm assuming that for some reason BPA just couldn't figure out the owning process name and this is just some sloppy programming when displaying the message. (Indeed, on subsequent runs of the BPA this message goes away, and other times it comes back.) Now, two more scary sounding ones: No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs sub-domain in the forward lookup zone for Windows SBS 2008. and, similarly, No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs zone for Windows SBS 2008. Now for these two, everything appears to be functioning correctly--but I'm assuming this is a weird state as a result of the SBS 2003 to 2008 migration. Can anyone provide any pointers on how to fix it, or whether or not it can be safely ignored? Thanks!

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/Firewalls, etc., using standard ports that would be open be default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master. But I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings: /etc/salt/master: publish_port: 465 ret_port: 443 /etc/salt/minion: master_port: 465 That did not work. Background: I have a custom developed application presently running on about 40 Kubuntu laptops (& more planned). Every few months I have to update the application. (Often this just amounts to replacing a .jar file, which requires root permissions.) I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using Team Viewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are either: use reverse ssh tunnels and bash scripts. I tested this and it works. But I don't get any of the reporting, etc., I would get with Salt Stack. use Salt Stack (or similar) management tool. But I need a really simple tool. I can't invest any time in a big learning curve. I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.

    Read the article

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, where I'm outputting to a .csv file data like CPU %time, memory bytes free, hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers are also collecting SQL stats. The web servers are collecting .Net relevant stuff. I am aware of PAL, and used it as a template of what data to capture based on server type actually. I just don't think the output it generates is detailed or flexible enough - but it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce some numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data and assemble a graph out of that and look for spikes and trends, but I'm convinced there is some technique to this that makes it more manageable than looking at a monsterous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that it is 5 minute averages, so a server could have a problem but it's intermittent and washes out in a 5 min average. What do you do with perfmon data that I could learn from?

    Read the article

  • Drop outs when accessing share by DFS name.

    - by Stephen Woolhead
    I have a strange problem, aren't they all! I have a DFS root \domain\files\vms, it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \server\vms$\testfiles and all is well, the files copy fine. I have repeated these tests many times. If I try and copy the files from the dfs root I get big pauses in the network traffic, about 50 seconds every couple of minutes, all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server. Every once in a while the copy will fail, no errors, the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder show that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2, the clients are Vista x64, Windows7 x64 and 2008 R2, all have the same problem. Anyone got any ideas? Cheers, Stephen More Information: I've been running a NetMon trace on the connection when the file copy fails and what seems to be standing out is that when opening a file that the copy completes on the SMB command looks like this: SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376 SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376 But for the last file when the copy dialog closes looks like this: SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77 SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND The main difference seems to be in the name, one is relative to the open file share, the other has gained the gt\files\media prefix which is the name of the DFS target. These failures are always preceded by logoff and back on of the SMB target. Might have to bump this one to PSS.

    Read the article
