Sunday, 8 February 2015

A simple solution to the plague of automated sales calls

Over the last few years, the problem of automated sales calls - both those that connect you to a salesperson and, lately, the much worse ones that play a recorded message - has got out of hand. There is a very simple solution. It requires legislation, and a little technical implementation, but it exists:

Step one:

Legislate to make cold calling an offence, particularly calls with a recorded message. £1,000 per offence strikes me as a nicely sized penalty. Place the onus on the caller to prove that the recipient has opted in to their call list.

"But", I hear you protest, "we have legislation to prosecute cold callers already and it never gets enforced. The caller just withholds their number or provides a bogus one".

That's where step two comes in:
Introduce a second offence: routing a call where caller ID isn't presented, with large penalties on the telecoms companies for failing to provide accurate caller ID information on every call. Domestic subscribers don't see this, but multi-line commercial phone connections include the facility to set the caller ID: the subscriber chooses what number to present to public view. That's why you get calls from "12340 000000" and the like.

There are a few possible objections:

"It's important to allow subscribers to withhold their number to protect their privacy" - ok, so if a domestic subscriber requests to withhold their number, present a separate number that won't connect to them, but is nevertheless registered with their phone company to their name.

"Sometimes we don't have the caller ID information" - the telecoms companies always have an origin of the call - otherwise, how would they know who to bill? It's a fairly trivial software problem to present it. If it's a call coming in from outside their network (like an international call), they can present a number for the telecoms company routing the call. We'd soon get to know who the spammers were, and start blocking them. If XYZ telecom is the origin of 90% of my spam phone calls, I'm going to block it whether any of my friends use it or not, then suggest to them that they move providers.

"It's too technically complex" - not true, see above.

"It's too expensive" - really not true, see above.



A year or two of that, and we'd soon see an end to

"You may be entitled to a refund on your PPI payments....."






Thursday, 16 October 2014

Freeswitch WTF?

Freeswitch is apparently 'very NAT friendly'.

No kidding. I've spent most of today trying to stop it from binding to the outside of the firewall and allowing anyone and everyone to try and authenticate, filling my logfiles with shite.

The -nonat and -nonatmap command line switches don't appear to work, on Windows at least, nor does any combination of them.

Commenting out every instance of

<param name="ext-rtp-ip" value="auto-nat"/>
<param name="ext-sip-ip" value="auto-nat"/>

doesn't work.

<param name="ext-rtp-ip" value="192.168.255.255"/>
<param name="ext-sip-ip" value="192.168.255.255"/>

doesn't work.

At the moment, I just want to use this thing internally as a test switch. I'm spending a *lot* of time trying to lock down the default config, instead of debugging my code. This is not good, and inside a private network it should not be necessary.
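(For reference, the one thing still on my list to try - and I haven't verified that it actually helps - is binding the internal SIP profile explicitly to the LAN address instead of relying on auto-detection, i.e. something like this in sip_profiles/internal.xml, where 192.168.1.10 stands in for whatever the box's internal address really is:

<param name="sip-ip" value="192.168.1.10"/>
<param name="rtp-ip" value="192.168.1.10"/>

No guarantees; it may well turn out to be another dead end.)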

Addendum:

I raised the question (extremely politely) on the FreeSWITCH users mailing list, and got this response from one of the project's senior members:

"It has NOTHING at all to do with the ext-sip-ip and ext-rtp-ip settings, If

you don't want outside access then block it at your nat/firewall."

That's all - no "Hi", no sign off, just that. Rude, aggressive, shouty and unhelpful.

I'm using Asterisk now. Ho hum.





Monday, 29 September 2014

A very short introduction to software testing

1. Introduction

This is intended as a very short introduction to testing, for programming novices and beginner developers without a lot of formal training. Skilled developers will find much to criticise here because I've necessarily simplified things a lot, but they're not the target audience.

2. Why do we test?

We test because bugs are inevitable. Software is complex. Very complex. The number of variables involved in any non-trivial program rapidly becomes impossible to keep track of, and as soon as you can't conceptualise your entire program mentally throughout its run cycle, things are going to slip through the cracks.

We test because we don't want those bugs to cause problems.

We also test because we don't want bugs to slow the development cycle down: if you do no testing at all, then when the code goes for UAT (see 'Types of testing'), the user finds shedloads of bugs, writes them up, and you fix them - but that takes a lot more time than finding most of the bugs yourself as you go along. Plus, you look like a pillock if your code is really buggy.


3. Ways of testing

Basic 'try it through the UI' testing - assuming your code has a UI, just run through it the way the user would. To make sure you're exercising every pathway, you need to produce use cases: the different ways in which the program will be used. For example, a typical use case for a 'user profile edit' component might be:

Select user
Click edit
Change 1 or more details
Click save

Don't forget about the 'change your mind' scenarios like:

Select user
Click edit
Change your mind and click cancel.


Automated testing - this is the preferred way of doing things. Most languages have a standard test framework (JUnit in Java, Test::More in Perl, etc.). There are advantages and disadvantages to automated testing:

Pros:
  • you only have to write the test once
  • changes to code can be easily checked for knock on effects elsewhere
  • during slow periods in a project, writing tests is a productive use of time
  • automated tools exist to identify which parts of your code are tested and which aren't

Cons:
  • the test suite is more code to maintain
  • Writing a test case usually takes longer than testing through the UI, so under time pressure it can be tempting just to test the UI by hand
  • Creating and maintaining test data can be a pain - databases usually need to be 'mocked', as do network services, which is extra overhead.
  • Some thought needs to go into designing your test suite if you don't want to end up with tons of cut-and-paste code, and this takes time.
Generally, I find that automated testing results in better, more reliable code and fewer bugs reaching regression testing (see below), but it does often take more time. In a project where the user or project management wants the moon on a stick, yesterday if not sooner, it can be difficult to find the time to build proper test suites. This is one of the many reasons why projects run that way tend to result in low-quality code. (Note to project managers: setting short deadlines is not an effective way of getting the most out of a development team. Go read 'The Mythical Man-Month'. Do it now.)
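To give a flavour of what an automated test actually looks like, here is a minimal JUnit 4 example. The Calculator class is invented purely for illustration - the point is the shape of the test, not the code under test:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Deliberately trivial class under test, made up for this example.
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    public void addReturnsTheSumOfItsArguments() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));
        assertEquals(0, calc.add(-1, 1));
    }
}

Run the whole suite after every change and you get the 'knock-on effects elsewhere' check from the pros list more or less for free.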

4. Types of testing

Smoke - literally 'switch it on, does it catch fire'. Compile the code, run it, see if any exceptions are thrown.

Unit - testing the individual component you've been working on, in isolation from the rest of the system.

Integration - test the integration of the component with the rest of the software suite, make sure that you haven't buggered anything elsewhere in the suite by this set of changes.

Functional - test the functionality of the program. Specifically, test the functionality you have been working on in this development cycle.

Regression - test everything, to make sure that the whole suite works.

UAT - user acceptance testing: performed by the customer/consumer to ensure that they are satisfied with the product.


5. Test Driven Development

TDD is a way of producing very high quality code in a mature development environment. Under TDD, the tests are developed first, from the design documentation. The functional code is then written to fulfil the tests. That way, you're absolutely certain that your code matches the design.
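As a sketch of how that plays out in practice (JUnit again, and the class names are invented for the example): you write the test from the design first, watch it fail because the code doesn't exist yet, then write just enough code to make it pass.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PostcodeFormatterTest {

    // Step 1: the test, written straight from the design document.
    // At this point PostcodeFormatter doesn't exist, so this won't even compile.
    @Test
    public void insertsSpaceBeforeFinalThreeCharacters() {
        assertEquals("SW1A 1AA", PostcodeFormatter.format("SW1A1AA"));
    }
}

// Step 2: just enough implementation to make the test pass.
class PostcodeFormatter {
    static String format(String raw) {
        String s = raw.replace(" ", "").toUpperCase();
        return s.substring(0, s.length() - 3) + " " + s.substring(s.length() - 3);
    }
}

Then refactor, rerun the tests, and repeat.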


6. Conclusion

Structured, formal testing is a vital part of professional development, and testing skills are an integral part of what makes a good developer. Good testing skills will make a huge difference to the quality of your code.

Tuesday, 16 September 2014

Threads and AsyncTasks - a quick note

Today I discovered that very long-running network operations completely bugger your Android UI thread if run in AsyncTasks, and really need to be pushed out to a proper Java thread.
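For my own reference, the shape of the fix looks roughly like this - a plain Java thread for the slow network call, handing the result back to the UI thread through a Handler. RefreshResult, fetchFromNetwork() and updateUi() are placeholders standing in for the real app code:

import android.os.Handler;
import android.os.Looper;

public class DataRefresher {

    // Handler bound to the main (UI) thread's looper.
    private final Handler uiHandler = new Handler(Looper.getMainLooper());

    public void refresh() {
        // Run the long network operation on an ordinary Java thread,
        // rather than tying up an AsyncTask.
        new Thread(new Runnable() {
            @Override
            public void run() {
                final RefreshResult result = fetchFromNetwork();

                // Post back to the UI thread before touching any views.
                uiHandler.post(new Runnable() {
                    @Override
                    public void run() {
                        updateUi(result);
                    }
                });
            }
        }).start();
    }

    // Placeholders for the real application code.
    private RefreshResult fetchFromNetwork() { return new RefreshResult(); }
    private void updateUi(RefreshResult result) { }
    static class RefreshResult { }
}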

Hopefully that will solve the UI crashing during data refresh.

Friday, 25 July 2014

XStream and Android 4.1.1

There seems to be a problem with XStream and versions of Android earlier than 4.3: XML elements bigger than a certain size don't seem to be extracted - all of the XML except the base64-encoded photos extracts cleanly.

I don't badly need to solve this problem at the moment, so I'm just recording it here. If I find that I need to solve it, I'll update.
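For context, the deserialisation in question is nothing exotic - roughly this shape, where Person and its photoBase64 field are simplified stand-ins for my real mapped classes:

import com.thoughtworks.xstream.XStream;

public class PhotoExtractor {

    // Simplified stand-in for the real mapped class. The photoBase64 field
    // holds a large base64-encoded string, which is the part that goes
    // missing on Android versions before 4.3.
    public static class Person {
        String name;
        String photoBase64;
    }

    public static Person parse(String xml) {
        XStream xstream = new XStream();
        xstream.alias("person", Person.class);  // map <person> elements to Person
        return (Person) xstream.fromXML(xml);
    }
}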