Search Results

Search found 1210 results on 49 pages for 'positive'.

Page 32 of 49

  • How do you take into account usability and user requirements for your application?

    - by voroninp
    Our team supports a BackOffice application: a mix of WinForms and WPF windows (about 80, including dialogs). It is really a kind of Swiss Army knife. It is used by developers, tech writers, security developers and testers. The requirements for new features come quite often, and sometimes we play Wizard of Oz to decide which GUI our users like the most. And it usually happens (I admit it can be just my subjective interpretation of reality) that one tiny detail giving the flavor of good usability to our app requires a lot of time. This time is spent 'fighting' with the GUI framework to make it act the way we need. And it is very difficult to make estimates for this type of task (at least for me and most members of our team). Scrum poker is not much help either. Management often considers this usability perfectionism to be a waste of time. On the other hand, the accumulated effect of features that each have some little usability flaw frustrates users. But the same users want frequent releases and instant bug fixes. Hence, there is no way to get positive feedback: there is always somebody who is unhappy. I constantly feel as if we are competing with ourselves: more features - more bugs/tasks/architecture. We are trying to outrun the cart we are pushing. New technologies arrive, and some of them could potentially help improve the design or decrease implementation time, but these technologies require learning, prototyping and so on. Well, that was the story. And now the question: How do you balance time pressure, product quality, and user and management satisfaction? When and how do you decide to leave a problem with a not-perfect but acceptable solution, and how often do you make these decisions? How do you deal with your own satisfaction? What are your priorities? P.S. Please keep in mind that we are a BackOffice team; we have neither a dedicated technical writer nor a GUI designer. A tester joined us only recently. We have much work to do and much freedom concerning 'how'. I like it because it fosters creativity, but I don't want to become too nerdy a perfectionist.

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, after examining the brief paragraph of code: // calculating correction that shifts reflection up/down according to water wave Y position float4 projected_waveheight = mul(float4(input.positionWS.x,input.positionWS.y,input.positionWS.z,1),g_ModelViewProjectionMatrix); float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; projected_waveheight = mul(float4(input.positionWS.x,-0.8,input.positionWS.z,1),g_ModelViewProjectionMatrix); waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; reflection_disturbance.y=max(-0.15,waveheight_correction+reflection_disturbance.y); My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, and the water is just rendered as if nothing were there, or just the sky: Now, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem now is that I cannot really pinpoint the logic behind it. Projecting the actual world space position of a point of the wave/water geometry and then multiplying by -.5f, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived with trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide): float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; And then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them: waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary selection of -.8f and -0.15f lead me to conclude that this might be a combination of heuristics/guess work. Is there a logical underpinning to this or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes; observe it on the lowest tessellation level. Hopefully, it might spark an idea I'm missing. The -.8f might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render, and -.15f might be the lower bound, a security measure.
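
    For readability, here is the same HLSL fragment quoted above, laid out line by line (the added comments are one reading of it, not part of the original sample):

        // calculating correction that shifts reflection up/down according to water wave Y position
        // project the actual wave vertex to clip space
        float4 projected_waveheight = mul(float4(input.positionWS.x, input.positionWS.y, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;
        // project the same point again, but at a fixed reference height of -0.8
        projected_waveheight = mul(float4(input.positionWS.x, -0.8, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;
        // the correction is half the post-divide (screen-space) difference between the two projected heights,
        // and the resulting disturbance is clamped so it never drops below -0.15
        reflection_disturbance.y = max(-0.15, waveheight_correction + reflection_disturbance.y);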

    Read the article

  • Are You Meeting Social Customer Service Expectations?

    - by Mike Stiles
    Whether it’s B2B or B2C, one sure path to repeat business is making sure your buyer has a memorably pleasant and successful customer service experience with you. If they get that kind of treatment consistently, that’s called a relationship. And those aren’t broken easily. Social customer service, driven by integrated SRM (social relationship management) technology, is the venue that can effectively connect customers not only to the brand, but to other customers. Positive experiences, once administered, don’t just rest with the recipient. They’re published in the form of public raves and peer-to-peer recommendation, a force far more actionable than push advertising. What’s more, your customers have come to expect access to you and satisfaction from you using social. An NM Incite study shows 83% of Twitter users and 71% of Facebook users expect to get an answer from brands the same day they post to them on their social assets. To make sure you’re responding, you’ve got to have a tech platform that’s set up to moderate and alert so you’ll know ASAP a customer needs help. The more integrated your social enterprise is, the faster you can not only respond, but respond with the answer they’re looking for, because your system is connected to the internal resources that can surface the answer or put wheels in motion to rectify the situation in the shortest amount of time possible. But if you go to the necessary lengths to make sure your customers feel valued and important, will they really reward you? The study says 71% of consumers who got quick and effective responses from companies they contacted via social were more likely to recommend the brand to their friends and followers. So yes, sweeping people off their feet pays big dividends in terms of word-of-mouth marketing. But you should be keenly aware of the reverse side of that coin. Give people a negative experience, either in real world or virtual customer service, and that message is highly likely to get amplified through social channels faster and louder. Only 36% of the NM Incite study’s respondents reported that their problems were solved quickly and effectively. 36%? That’s hardly an impressive number. It gets worse. 10% never got so much as a response - at all. Going back to the relationship analogy, companies that are this deep in the ditch where customer service is concerned are making their girl or boyfriends really easy for a competitor to steal. Given the technology tools and data available right now for having an intimate knowledge of the customer, what products they’ve purchased, likely problems with those products, effective resolutions to those problems, and follow-up communication to gauge satisfaction, there are fewer excuses than ever for making the lifeblood of your business feel like you couldn’t care less. @mikestiles

    Read the article

  • How to do proper Alpha in XNA?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code. public void Draw(SpriteBatch batch, Camera camera, float alpha) { int tileMapWidth = Width; int tileMapHeight = Height; batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.PointWrap, DepthStencilState.Default, RasterizerState.CullNone, null, camera.TransformMatrix); for (int x = 0; x < tileMapWidth; x++) { for (int y = 0; y < tileMapHeight; y++) { int tileIndex = _map[y, x]; if (tileIndex != -1) { Texture2D texture = _tileTextures[tileIndex]; batch.Draw( texture, new Rectangle( (x * Engine.TileWidth), (y * Engine.TileHeight), Engine.TileWidth, Engine.TileHeight), new Color(new Vector4(1f, 1f, 1f, alpha ))); } } } batch.End(); } As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
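
    For reference, here is the same example code from the question laid out for readability:

        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            int tileMapWidth = Width;
            int tileMapHeight = Height;

            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.PointWrap,
                        DepthStencilState.Default, RasterizerState.CullNone, null, camera.TransformMatrix);

            for (int x = 0; x < tileMapWidth; x++)
            {
                for (int y = 0; y < tileMapHeight; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex != -1)
                    {
                        Texture2D texture = _tileTextures[tileIndex];
                        batch.Draw(
                            texture,
                            new Rectangle(x * Engine.TileWidth, y * Engine.TileHeight,
                                          Engine.TileWidth, Engine.TileHeight),
                            // straight (non-premultiplied) white tint with the requested alpha
                            new Color(new Vector4(1f, 1f, 1f, alpha)));
                    }
                }
            }

            batch.End();
        }

    One hedged observation: if this is XNA 4.0, BlendState.AlphaBlend expects premultiplied alpha, so a straight-alpha tint like the Vector4 above tends to blend oddly; BlendState.NonPremultiplied, or a tint of Color.White * alpha, are the usual alternatives to try.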

    Read the article

  • Let's keep informed with "Data Explorer"

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It's a Microsoft SQL Azure Lab, and its codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), which can then be used by other applications. We can, of course, still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" aimed at the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment made on his post, my first impression was confirmed: "…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging." The typical operations of the ETL phase (processing and organization of data in different formats) can be carried out thanks to Data Explorer Mashup; the original post includes a screenshot of the tool. The flexibility in the manipulation of information is given by the Data Explorer Formula Language, a specific formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format). My question is, is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore? The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file / database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes? In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
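
    To illustrate the dynamic-SQL idea described above, here is a minimal, hedged sketch in Java. The metadata structure and the names used (FACT_SALES, DIM_STORE, sales_amount, and so on) are hypothetical placeholders, not taken from the question; a real implementation would also need filtering, parameter handling, and identifier validation.

        import java.util.List;
        import java.util.Map;

        // Builds a star-schema aggregate query from a tiny fact/dimension config.
        public class AdHocQueryBuilder {

            // dimensions: label -> {dimension table, join key, display column}
            // metrics:    label -> fact-table column to aggregate
            public static String buildQuery(String factTable,
                                            Map<String, String[]> dimensions,
                                            Map<String, String> metrics,
                                            List<String> chosenDims,
                                            List<String> chosenMetrics) {
                StringBuilder select = new StringBuilder("SELECT ");
                StringBuilder from = new StringBuilder(" FROM ").append(factTable).append(" f");
                StringBuilder group = new StringBuilder(" GROUP BY ");

                for (int i = 0; i < chosenDims.size(); i++) {
                    String[] dim = dimensions.get(chosenDims.get(i));   // {table, key, label column}
                    String alias = "d" + i;
                    select.append(alias).append('.').append(dim[2]).append(", ");
                    from.append(" JOIN ").append(dim[0]).append(' ').append(alias)
                        .append(" ON f.").append(dim[1]).append(" = ").append(alias).append('.').append(dim[1]);
                    group.append(alias).append('.').append(dim[2]);
                    if (i < chosenDims.size() - 1) {
                        group.append(", ");
                    }
                }
                for (int i = 0; i < chosenMetrics.size(); i++) {
                    select.append("SUM(f.").append(metrics.get(chosenMetrics.get(i))).append(')');
                    if (i < chosenMetrics.size() - 1) {
                        select.append(", ");
                    }
                }
                // Assumes at least one dimension and one metric were chosen.
                return select.toString() + from + group;
            }
        }

    For example, with "Location" mapped to {"DIM_STORE", "store_key", "store_name"} and "Product Sales $" mapped to "sales_amount", this would produce a plain GROUP BY query against FACT_SALES - essentially what an OLAP engine would generate from an MDX request in such a simple case.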

    Read the article

  • Virtual SMTP not sending mails

    - by DoStuffZ
    Hi, I have been googling for the better part of the last two hours without reaching any conclusion. My mails are not being sent from the production webserver. If I stop/start the Virtual SMTP I get this in the event log: No usable TLS server certificate for SMTP virtual server instance '1' could be found. TLS will be disabled for this virtual-server. We recently updated the web application running on it, and I assume something went amiss during that. Googling the message straight up gave me a list that might just as well have been in Greek. I found a security certificate on the server; installing that gave no change. I basically played Russian roulette with the certificate file (.cer), though I was somewhat certain it would not have a negative effect. (Russian roulette with a 6-chamber gun and 2 bullets.) I found a .pfx in our local documentation folder, though I'm far from certain it would have a positive effect. (6 chambers and 5 bullets.) I found a site describing how Virtual SMTP - Properties - Access should have a button saying Certificates. I have text saying "Did not find any TLS certificates" and a grayed-out tick box saying "Require TLS certificate". I found that the TLS in question is SSL ver 3.1+ (3.1-3.3). So the question is: how do I enable the SMTP server to once again send emails, like before?
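
    As a first, hedged check, it may be worth confirming whether the local computer's Personal certificate store actually contains a certificate the SMTP virtual server could use (for the IIS SMTP service the certificate's common name generally has to match the server's FQDN):

        rem List the certificates in the local machine's Personal ("My") store
        certutil -store My

    If the .pfx from the documentation folder turns out to hold the right certificate and private key, importing it into that store (for example via the Certificates MMC snap-in for the Computer account) and restarting the SMTP service would be the next thing to try.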

    Read the article

  • Backup program for Windows using non-proprietary format?

    - by Cristi Diaconescu
    I'm looking at the various local backup programs for Windows, and I was wondering which of them use a non-proprietary backup format? By non-proprietary, I mean I want to be able to access at least the latest version of the backed-up files either directly, or by using an open-standard format like zip/7z/rdiff... The other thing I'm looking for in a backup program is the ability to create incremental backups. What I have found so far: SyncBack copies files as-is, using separate directories for versioning; pretty much the same goes for all the 'roll your own' task scheduler + rsync/xcopy32/robocopy/MS SyncToy/etc. solutions; GFI Backup appears to be using Zip files, at least in their 'Business' version - not sure about the free 'Home' version (didn't try it yet, but it's next on my list); Mozy (!) supports local backup starting with v2.0 and basically provides a 2nd local copy on a separate partition. Subjectively, Mozy feels slow and resource intensive (I think it took more than a week to finish the first local backup of ~300 GB), and it does not appear to offer file versioning (arguably, you can get older file versions online). On the positive side, it looks like the local backup is integrated into the restore process, which was traditionally a masochistic experience (and this goes for any online backup provider). Other suggestions? I favor ease of use over tons of options (e.g. SyncBack is very flexible but it offers sooo many ways to shoot yourself in the foot...)
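
    For the 'roll your own' route mentioned above, a minimal scheduled-task sketch using robocopy might look like this (paths are placeholders; note that /MIR keeps only a mirror of the latest state, so it gives easy direct access to the files but not the versioned increments asked about):

        rem Mirror the documents folder to a backup drive and append to a log
        robocopy "C:\Users\Cristi\Documents" "E:\Backup\Documents" /MIR /FFT /R:2 /W:5 /LOG+:"E:\Backup\backup.log"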

    Read the article

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking this, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition there are some other notes (note that none of these are hard requirements): the content is always files, residing on a single server, accessible from Samba shares - standard MS Office document fare, but also a lot of rars and zips which it is necessary to search inside; permissions support, allowing for user-based control to reflect current permission access on the Samba shares; the userbase will remain fairly static, so manual management of users is fine; the majority of users will be Windows based. I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?

    Read the article

  • I need advices: small memory footprint linux mail server with spam filtering

    - by petermolnar
    I have a VPS which is originally destined to be a webserver but some minimal mail capabilities are needed to be deployed as well, including sending and receiving as standalone server. The current setup is the following: Postfix reveices the mail, the users are in virtual tables, stored in MySQL on connection all servers are tested with policyd-weight service against some DNSBLs all mail is runs through SpamAssassin spamd with the help of spamc client the mail is then delivered with Dovecot 2' LDA (local delivery agent), virtual users as well As you saw... there's no virus scanner running, and that's for a reason: clamav eats all the memory possible and also, virus mails are all filtered out with this setup (I've tested the same with ClamAV enabled for 1,5 years, no virus mail ever got even to ClamAV) I don't use amavisd and I really don't want to. You only need that monster if you have plenty of memory and lots of simultaneous scanners. It's also a nightmare to fine tune by hand. I run policyd-weight instead of policyd and native DNSBLs in postfix. I don't like to send someone away because a single service listed them. Important statement: everything works fine. I receive very small amount of spam, nearly never get a false positive and most of the bad mail is stopped by policyd-weight. The only "problem" that I feel the services at total uses a bit much memory alltogether. I've already cut the modules of spamassassin (see below), but I'd really like to hear some advices how to cut the memory footprint as low as possible, mostly: what plugins SpamAssassin really needs and what are more or less useless, regarding to my current postfix & policyd-weight setup? SpamAssassin rules are also compiled with sa-compile (sa-update runs once a week from cron, compile runs right after that) These are some of the current configurations that may matter, please tell me if you need anything more. postfix/master.cf (parts only) dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender} postfix/main.cf (parts only) smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525, permit policyd-weight.conf (parts only) $REJECTMSG = "550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs"; $REJECTLEVEL = 4; $DEFER_STRING = 'IN_SPAMCOP= BOGUS_MX='; $DEFER_ACTION = '450'; $DEFER_LEVEL = 5; $DNSERRMSG = '450 No DNS entries for your MTA, HELO and Domain. 
Contact YOUR administrator'; # 1: ON, 0: OFF (default) # If ON request that ALL clients are only checked against RBLs $dnsbl_checks_only = 0; # 1: ON (default), 0: OFF # When set to ON it logs only RBLs which affect scoring (positive or negative) $LOG_BAD_RBL_ONLY = 1; ## DNSBL settings @dnsbl_score = ( # host, hit, miss, log name 'dnsbl.ahbl.org', 3, -1, 'dnsbl.ahbl.org', 'dnsbl.njabl.org', 3, -1, 'dnsbl.njabl.org', 'dnsbl.sorbs.net', 3, -1, 'dnsbl.sorbs.net', 'bl.spamcop.net', 3, -1, 'bl.spamcop.net', 'zen.spamhaus.org', 3, -1, 'zen.spamhaus.org', 'pbl.spamhaus.org', 3, -1, 'pbl.spamhaus.org', 'cbl.abuseat.org', 3, -1, 'cbl.abuseat.org', 'list.dsbl.org', 3, -1, 'list.dsbl.org', ); # If Client IP is listed in MORE DNSBLS than this var, it gets REJECTed immediately $MAXDNSBLHITS = 3; # alternatively, if the score of DNSBLs is ABOVE this level, reject immediately $MAXDNSBLSCORE = 9; $MAXDNSBLMSG = '550 Az levelezoszerveruk IP cime tul sok spamlistan talahato, kerjuk ellenorizze! / Your MTA is listed in too many DNSBLs; please check.'; ## RHSBL settings @rhsbl_score = ( 'multi.surbl.org', 4, 0, 'multi.surbl.org', 'rhsbl.ahbl.org', 4, 0, 'rhsbl.ahbl.org', 'dsn.rfc-ignorant.org', 4, 0, 'dsn.rfc-ignorant.org', # 'postmaster.rfc-ignorant.org', 0.1, 0, 'postmaster.rfc-ignorant.org', # 'abuse.rfc-ignorant.org', 0.1, 0, 'abuse.rfc-ignorant.org' ); # skip a RBL if this RBL had this many continuous errors $BL_ERROR_SKIP = 2; # skip a RBL for that many times $BL_SKIP_RELEASE = 10; ## cache stuff # must be a directory (add trailing slash) $LOCKPATH = '/var/run/policyd-weight/'; # socket path for the cache daemon. $SPATH = $LOCKPATH.'/polw.sock'; # how many seconds the cache may be idle before starting maintenance routines #NOTE: standard maintenance jobs happen regardless of this setting. $MAXIDLECACHE = 60; # after this number of requests do following maintenance jobs: checking for config changes $MAINTENANCE_LEVEL = 5; # negative (i.e. SPAM) result cache settings ################################## # set to 0 to disable caching for spam results. To this level the cache will be cleaned. $CACHESIZE = 2000; # at this number of entries cleanup takes place $CACHEMAXSIZE = 4000; $CACHEREJECTMSG = '550 temporarily blocked because of previous errors'; # after NTTL retries the cache entry is deleted $NTTL = 1; # client MUST NOT retry within this seconds in order to decrease TTL counter $NTIME = 30; # positve (i.,e. HAM) result cache settings ################################### # set to 0 to disable caching of HAM. To this number of entries the cache will be cleaned $POSCACHESIZE = 1000; # at this number of entries cleanup takes place $POSCACHEMAXSIZE = 2000; $POSCACHEMSG = 'using cached result'; #after PTTL requests the HAM entry must succeed one time the RBL checks again $PTTL = 60; # after $PTIME in HAM Cache the client must pass one time the RBL checks again. #Values must be nonfractal. Accepted time-units: s, m, h, d $PTIME = '3h'; # The client must pass this time the RBL checks in order to be listed as hard-HAM # After this time the client will pass immediately for PTTL within PTIME $TEMP_PTIME = '1d'; ## DNS settings # Retries for ONE DNS-Lookup $DNS_RETRIES = 1; # Retry-interval for ONE DNS-Lookup $DNS_RETRY_IVAL = 5; # max error count for unresponded queries in a complete policy query $MAXDNSERR = 3; $MAXDNSERRMSG = 'passed - too many local DNS-errors'; # persistent udp connection for DNS queries. #broken in Net::DNS version 0.51. 
Works with Net::DNS 0.53; DEFAULT: off $PUDP= 0; # Force the usage of Net::DNS for RBL lookups. # Normally policyd-weight tries to use a faster RBL lookup routine instead of Net::DNS $USE_NET_DNS = 0; # A list of space separated NS IPs # This overrides resolv.conf settings # Example: $NS = '1.2.3.4 1.2.3.5'; # DEFAULT: empty $NS = ''; # timeout for receiving from cache instance $IPC_TIMEOUT = 2; # If set to 1 policyd-weight closes connections to smtpd clients in order to avoid too many #established connections to one policyd-weight child $TRY_BALANCE = 0; # scores for checks, WARNING: they may manipulate eachother # or be factors for other scores. # HIT score, MISS Score @client_ip_eq_helo_score = (1.5, -1.25 ); @helo_score = (1.5, -2 ); @helo_score = (0, -2 ); @helo_from_mx_eq_ip_score= (1.5, -3.1 ); @helo_numeric_score= (2.5, 0 ); @from_match_regex_verified_helo= (1,-2 ); @from_match_regex_unverified_helo = (1.6, -1.5 ); @from_match_regex_failed_helo = (2.5, 0 ); @helo_seems_dialup = (1.5, 0 ); @failed_helo_seems_dialup= (2, 0 ); @helo_ip_in_client_subnet= (0,-1.2 ); @helo_ip_in_cl16_subnet = (0,-0.41 ); #@client_seems_dialup_score = (3.75, 0 ); @client_seems_dialup_score = (0, 0 ); @from_multiparted = (1.09, 0 ); @from_anon= (1.17, 0 ); @bogus_mx_score = (2.1, 0 ); @random_sender_score = (0.25, 0 ); @rhsbl_penalty_score = (3.1, 0 ); @enforce_dyndns_score = (3, 0 ); spamassassin/init.pre (I've put the .pre files together) loadplugin Mail::SpamAssassin::Plugin::Hashcash loadplugin Mail::SpamAssassin::Plugin::SPF loadplugin Mail::SpamAssassin::Plugin::Pyzor loadplugin Mail::SpamAssassin::Plugin::Razor2 loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold loadplugin Mail::SpamAssassin::Plugin::MIMEHeader loadplugin Mail::SpamAssassin::Plugin::ReplaceTags loadplugin Mail::SpamAssassin::Plugin::Check loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch loadplugin Mail::SpamAssassin::Plugin::URIDetail loadplugin Mail::SpamAssassin::Plugin::Bayes loadplugin Mail::SpamAssassin::Plugin::BodyEval loadplugin Mail::SpamAssassin::Plugin::DNSEval loadplugin Mail::SpamAssassin::Plugin::HTMLEval loadplugin Mail::SpamAssassin::Plugin::HeaderEval loadplugin Mail::SpamAssassin::Plugin::MIMEEval loadplugin Mail::SpamAssassin::Plugin::RelayEval loadplugin Mail::SpamAssassin::Plugin::URIEval loadplugin Mail::SpamAssassin::Plugin::WLBLEval loadplugin Mail::SpamAssassin::Plugin::VBounce loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody spamassassin/local.cf (parts) use_bayes 1 bayes_auto_learn 1 bayes_store_module Mail::SpamAssassin::BayesStore::MySQL bayes_sql_dsn DBI:mysql:db:127.0.0.1:3306 bayes_sql_username user bayes_sql_password pass bayes_ignore_header X-Bogosity bayes_ignore_header X-Spam-Flag bayes_ignore_header X-Spam-Status ### User settings user_scores_dsn DBI:mysql:db:127.0.0.1:3306 user_scores_sql_password user user_scores_sql_username pass user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC # for better speed score DNS_FROM_AHBL_RHSBL 0 score __RFC_IGNORANT_ENVFROM 0 score DNS_FROM_RFC_DSN 0 score DNS_FROM_RFC_BOGUSMX 0 score __DNS_FROM_RFC_POST 0 score __DNS_FROM_RFC_ABUSE 0 score __DNS_FROM_RFC_WHOIS 0 UPDATE 01 As adaptr advised I remove policyd-weight and configured postfix postscreen, this resulted approximately -15-20 MB from RAM usage and a lot faster work. I'm not sure it's working at full capacity but it seems promising.

    Read the article

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, My company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow. Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following: In particular because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance a few things you can try are: Increasing the available memory available to Fisheye (see the document above). Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process) (Sorry for not hyperlinking; don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks! Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.

    Read the article

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies. In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server on which the web service is running, and it intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service. Some of these tasks may relate to authentication and authorization. Moreover, one can create handler chains, such that the handlers run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developer of the client application or of the web service having to invest anything in it. The code for the client application and the web service is not affected. You may find a number of articles on the internet on this subject by searching on Google for "web service handler chain". I performed searches for web service handlers in IIS or ASP.NET. I get some hits, but apparently handlers in IIS have a different meaning from that described above. My question therefore is: can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike
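
    For readers who have not seen the Java side being described, a JAX-WS handler looks roughly like the following (a minimal, hedged sketch; the class name and the check it performs are illustrative only):

        import java.util.Set;
        import javax.xml.namespace.QName;
        import javax.xml.ws.handler.MessageContext;
        import javax.xml.ws.handler.soap.SOAPHandler;
        import javax.xml.ws.handler.soap.SOAPMessageContext;

        // One link in a handler chain: sees the SOAP message before the web service does.
        public class AuthCheckHandler implements SOAPHandler<SOAPMessageContext> {

            @Override
            public boolean handleMessage(SOAPMessageContext context) {
                boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
                if (!outbound) {
                    // Inbound request: authentication/authorization checks would go here.
                    // Returning false here stops the chain, and the service is never invoked.
                }
                return true; // continue to the next handler and then the service
            }

            @Override
            public boolean handleFault(SOAPMessageContext context) { return true; }

            @Override
            public void close(MessageContext context) { }

            @Override
            public Set<QName> getHeaders() { return null; }
        }

    The order of handlers in the chain is declared separately (for example in a handler-chain XML file referenced from the service), which is the part the question is asking how to reproduce on the IIS/ASP.NET side.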

    Read the article

  • On Windows XP, Start > Run "My Documents" sometimes doesn't work

    - by Clayton Hughes
    On all of my home computers, I can enter "my documents" into the Start Run prompt and the My Documents folder of the current profile will open up. What's more, I can continue typing subfolders, files, etc., and auto-complete works; it's smart and enjoyable. I can't check at the moment, but I'm almost positive entries like "My Pictures" and "My Music" also go to their correct folders. On my work computers, if I enter "my documents" into the Start Run prompt, I get the following error: "Windows cannot find 'my'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start Button, and then click Search." I can sort of circumvent this by creating a shortcut in my PATH named 'my' that points to the My Documents folder, but this doesn't restore the auto-complete behavior (and it's otherwise imperfect, of course, because "my pictures" or "my music" would all direct to the same place). A Google search doesn't provide much help on this, although it does identify a poster in 2007 with this same question at another board: http://www.msfn.org/board/lofiversion/index.php/t124813.html (Login required, but Google cache available here: http://preview.tinyurl.com/ygxhwwl) Is this just a limitation of computers belonging to a domain, or is there some way I can get this functionality back? My Documents folder does live in the standard place (C:\Documents and Settings\{username}\My Documents), and not on a network drive or anything. It's probably worth adding that the computers are part of some freakish Novell domain thing, too. I'm not in IT here so I'm not too up on the details. Thanks for any help/suggestions!

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slow-downs and I'm not surprised (it's around 6 years old). Here's what I've verified: They are not very frequent (only a couple of times a day). When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow. Even as it is happening, the CPU usage stays low. It happens to applications (such as a text editor, Firefox, Skype). It never happens to some applications (such as games) which I use for hours under heavy CPU load. Also of note: The graphics card and PSU are new (around a year old). Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows. This HDD has been through many partitioning schemes, and a few heavy operations (such as moving around 200GB of data). Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other less likely possibilities (such as RAM, software, or PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue, but that is not what I'm looking for here. My main question is: What tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem here; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since this new hard drive won't have any of the software I use daily). Running on Windows/Linux.
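
    Since the machine runs both Windows and Linux, one low-effort place to start is the drive's own SMART data (a hedged sketch assuming smartmontools is installed on the Linux side; the device name may differ):

        # Health summary, error log and attributes - reallocated or pending sectors are the usual red flags
        sudo smartctl -a /dev/sda

        # Start the drive's built-in extended self-test, then re-check the results later with -a
        sudo smartctl -t long /dev/sda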

    Read the article

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/ When I run the command realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended everything seems to connect. My sssd.conf looks like the following: [nss] filter_groups = root filter_users = root reconnection_retries = 3 [pam] reconnection_retries = 3 [sssd] domains = DOMAIN.COMPANY.COM config_file_version = 2 services = nss, pam [domain/DOMAIN.COMPANY.COM] ad_domain = DOMAIN.COMPANY.COM krb5_realm = DOMAIN.COMPANY.COM realmd_tags = manages-system joined-with-adcli cache_credentials = True id_provider = ad krb5_store_password_if_offline = True default_shell = /bin/bash ldap_id_mapping = True use_fully_qualified_names = True fallback_homedir = /home/%d/%u access_provider = ad My /etc/pam.d/common-auth looks like this: auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_sss.so use_first_pass # here's the fallback if no module succeeds auth requisite pam_deny.so # prime the stack with a positive return value if there isn't one already; # this avoids us returning an error just because nothing sets a success code # since the modules above will each just jump around auth required pam_permit.so # and here are more per-package modules (the "Additional" block) auth optional pam_cap.so However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log: Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth] Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module) Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2 What do I need to do to allow Active Directory users the ability to log in?
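
    One hedged diagnostic that may help narrow this down: with use_fully_qualified_names = True, sssd will only resolve the account in its fully qualified form, so before looking at sshd it is worth checking whether NSS can see the user at all (the names below are taken from the log and config; adjust the case/domain as needed):

        # Should print a passwd entry if sssd is resolving AD users correctly
        getent passwd [email protected]
        id [email protected]

    If the unqualified 'nwalke' is what is being typed at the ssh prompt, the 'Invalid user nwalke' line in auth.log would be consistent with that requirement.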

    Read the article

  • Win7 Domain User Profile - Desktop Icon management best practices request

    - by Doltknuckle
    Here's the situation: We have a large (5,000+ user) organization that is currently using folder redirection to manage the Windows desktop icons. This folder is redirected to a network share where we can centrally manage the different sites and such. When a user tries to use a computer while the network is not available, they are unable to use any shortcuts in the Public folder. We only redirect the C:\Users\%username%\Desktop folder. Does anyone have any suggestions on how to go about managing desktop icons? We still want a central location from which to manage these items, but we also need a way to keep the system working when the network is unavailable. As a point of clarification, the network rarely goes down. We do have instances where a few computers do not have a network connection. Usually, something is simply unplugged. Since we have multiple sites, the line from a branch to the central office has gone down a few times. This is more of an attempt to maintain a positive end user experience when disconnected from the network.

    Read the article

  • Webcam becomes "Unknown Device" after Windows Messenger 2011 is installed

    - by Boris
    I have a Sony VAIO VGN-NS290J laptop. I installed Windows 7 Ultimate 64-bit. I was able to find drivers for all hardware without any problems. Recently, I installed Microsoft Windows Live Essentials 2011, i.e. Windows Live Messenger 2011. Ever since that application has been running on my computer, my webcam is no longer recognized by the OS. It is listed as an "Unknown Device" and placed in the Universal Serial Bus controllers group in the Device Manager. There don't seem to be any drivers for this webcam. It's a standard Sony Motion Eye web camera, and Sony does not offer any drivers for it. There is one application to download that utilizes the camera, but there are no drivers (and the system shows the same behavior regardless of the presence of the application). From time to time the webcam becomes recognized by the OS again, after a couple of restarts, but not always. Then it becomes unknown again. I am absolutely positive that this issue is caused by Windows Live Messenger 2011, because the same symptoms have followed the same cause before. I wish to be able to continue to use this software, but also to use my webcam. I was wondering if anyone had a similar issue and if there is a way to fix it. Thanks for all the help, I appreciate it. Update: I have discovered a pattern - if the camera goes astray, restarting the machine does not bring it back; but switching the computer off and turning it back on does. Every time! This is getting super complicated :)

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly it seems Norman has intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of the more typical listing of all viruses whose names start with W.) Is there a common virus list somewhere, or is it, as I suspect, that every antivirus manufacturer is free to come up with their own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory, and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I still would like to know if it would be possible for me to verify this in any other way though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.
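
    One additional, hedged way to double-check a suspected false positive like this is to hash the quarantined copy and the known-good template and compare the digests, or look the hash up with a multi-engine service such as VirusTotal (the file names below are placeholders; certutil's hash option is available on Vista/Server 2008 and later):

        rem Byte-for-byte comparison, as already done above
        fc /b quarantined-vshost32.exe pristine-vshost32.exe

        rem Hashes, which can also be searched for on VirusTotal
        certutil -hashfile quarantined-vshost32.exe SHA1
        certutil -hashfile pristine-vshost32.exe SHA1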

    Read the article

  • Update git on mac

    - by Meltemi
    I can't remember how I installed git a while back... but now it's living in /usr/bin/git and needs to be updated. I don't care how (pre-compiled or building my own), but what I don't want is another version existing somewhere else. I vaguely remember curl(ing) down the source and compiling it, but I'm not positive. Anyway, what's the easiest way to keep Git up-to-date under Mac OS X? Side question: I'm not that familiar with git. Once it's installed, is it ENTIRELY contained within its directory? So, in my case, is everything about git on my machine (excluding the actual code repositories, of course) in /usr/bin/git/? If so, can I just move git around with a simple mv -R /usr/bin/git /opt/git, then update my $PATH, and everything should work as before? If so, then I suppose I could just install again by any method and to any directory... and then move the new one into /usr/bin, replacing the old version?!? Or is this bad?
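
    A hedged sketch of the usual low-maintenance route (this assumes Homebrew; the paths mentioned are the typical defaults, not taken from the question):

        # See which git currently wins on the PATH, and its version
        which git
        git --version

        # Install or update git via Homebrew; it lands under /usr/local and shadows
        # /usr/bin/git once /usr/local/bin comes before /usr/bin in PATH
        brew install git
        brew upgrade git

    On the side question: /usr/bin/git is just the front-end binary; the supporting subcommands normally live in a separate libexec/git-core directory (plus templates and man pages), so a git install is not contained in that single path, and moving the one file around generally isn't enough.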

    Read the article

  • Correct way to set up office network - 8 workstations, a file server and a staging server

    - by naunu
    Our office had this old-school Windows 2003 domain setup, our server caught fire, and now we are looking to do it right from scratch. Here is what we need: 5 PC and 3 Mac workstations for web development; they will each have a WAMP/MAMP setup on them, managed by their developers. We will have a file server for assets, and a LAMP server with an external IP for staging. Here is what we have to work with: 5 IP addresses, a brand new PC file server with Windows 2008 SE, a D-Link DSS-16+ 16-port switch, a Belkin 5-port wireless router, and a cable modem with 4 ports. How I have it set up now (this is a temporary makeshift setup): cable modem = LAMP server + wireless router; wireless router = switch; switch = all of the workstations and the file server (set up as a workgroup). We have noticed our internet is very slow with us all plugged into the switch, and the switch plugged into the router. I am not positive, but I think it is because our router does not have NAT. We are also having problems with the Macs' connection to the network drive - it keeps disconnecting. I want this done right, and we have a ~$600 budget to buy anything else we need. Does anybody have any advice for me? Should I set up a domain or a workgroup?

    Read the article

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results. SQL Mail operates perfectly on both instances. DB Mail operates perfectly on the newly installed instance. On the upgraded instance, DB Mail does not send any mail. Of course, I am not positive that the fact this instance is upgraded has anything to do with the issue, but it might. The configuration of my DB Mail profile and account looks identical to my functioning instance. In the configuration of the 'alerts' tab in the SQL Agent properties, I have tried selecting both DB Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain with DB engine account). All messages sent via sp_send_dbmail and those sent via the 'test email' option are visible in the sysmail_allitems queue and remain there as 'unsent'. The sent_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain/myuser' and 'activation successful.'. Selecting from the externalmailqueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent and the entire instance, and moving the other, functioning instance to the other node in the cluster. Any thoughts? Thanks.
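
    A few standard Database Mail diagnostics that may be worth running on the upgraded instance, given the 'mail queue stopped' entries in the log (a hedged sketch using the documented msdb procedures):

        -- Is Database Mail started on this instance?
        EXEC msdb.dbo.sysmail_help_status_sp;

        -- State of the mail queue (length, last activation, state)
        EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'mail';

        -- Restart the queue if it reports as stopped/inactive
        EXEC msdb.dbo.sysmail_start_sp;

        -- Per-item detail for anything that never went out
        SELECT mi.mailitem_id, mi.sent_status, el.description
        FROM msdb.dbo.sysmail_allitems AS mi
        LEFT JOIN msdb.dbo.sysmail_event_log AS el ON el.mailitem_id = mi.mailitem_id
        WHERE mi.sent_status <> 'sent';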

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal level values measured in dBm, and the SNMP host on the remote device reports the values as negative values, i.e., -90 dBm. However, check_snmp seems to be incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0 DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25 DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97 Processing line 1 oidname: DEVICE-MIB::AverageReceiveSNR.0 response: = INTEGER: 25 Processing line 2 oidname: DEVICE-MIB::CurrentNoiseFloor.0 response: = INTEGER: -97 SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97 If I run it with a single OID, it gives me an error that the format is incorrect: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv Range format incorrect And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct; however, it'll never generate a notification when out of range: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0 DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97 Processing line 1 oidname: DEVICE-MIB::CurrentNoiseFloor.0 response: = INTEGER: -97 SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97 What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's -85 to -80 dBm, and an OK when -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?

    Read the article

  • GRUB reporting wrong partition type

    - by plok
    It all started when I had to replace one of the disks that the software RAID 1 on this machine currently uses. From that moment on I have not been able to boot to the Windows XP that is installed on the fourth hard drive, /dev/sdd. I am almost positive that the problem is related not to Windows but to GRUB, as if I unplug all the other hard drives so that the Windows XP disk is now /dev/sda it boots with no problem. The problem seems to be that GRUB detects a wrong partition type, which I understand suggest that something is really messed up. This is what I get when I try to follow the steps that until now had worked like a charm: grub> map (hd0) (hd3) grub> map (hd3) (hd0) grub> root (hd3,0) Filesystem type is ext2fs, partition type 0xfd 0xfd? That doesn't make sense. /dev/sdb and sdc are 0xfd (Linux raid), but not /dev/sdd: edel:~# fdisk -l [...] Disk /dev/sdd: 250.0 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00048d89 Device Boot Start End Blocks Id System /dev/sdd1 * 1 30400 244187968+ 7 HPFS/NTFS edel:/boot/grub# cat device.map (hd0) /dev/sda (hd1) /dev/sdb (hd2) /dev/sdc (hd3) /dev/sdd I have been trying to work this out for hours, to no avail. Can anyone point me in the right direction?

    Read the article

  • Running Debian as guest operating system on a Hyper-V VM

    - by kce
    Hello. Layer-9 considerations are prompting a migration from Citrix XenServer to Hyper-V as our shop's virtualization platform of choice. This will require me to migrate our existing virtual machines from XenServer to Hyper-V. A handful of these VMs are running Debian. Unfortunately, Debian does not seem to be on the list of approved/supported guest operating systems. In fact, it seems that running Debian as a guest operating system on Hyper-V is rather difficult, although apparently not impossible. I have two interrelated questions: Does anyone have any experience running a Debian guest on Hyper-V? Is it one of those things where it just will not work at all, or is it more along the lines of "it will probably work fine, but we won't support it"? Any experience here, positive or negative, would be helpful. How much of a bad idea is it to deviate from Hyper-V's list of supported guest operating systems? Again, is it basically asking for Bad Things (TM) to happen, or is it just another instance of "it will probably work fine, but we won't support it"? Or is it somewhere in the middle? Thank you.

    Read the article
